I've been reading the June 15 issue of Science, and I am struck by the irony of two articles. The first is a news item titled "Social scientists hope for reprieve from the Senate." The U.S. House of Representatives recently voted to prohibit the NSF from funding political science research, and to reduce the scale of the American Community Survey (a census-based social survey). The bill was co-sponsored by an unenlightened congressman from my own (unenlightened) state of Arizona, Jeff Flake (jokes about "what's in a name" come to mind here). "Flake says political science isn't sufficiently rigorous to warrant federal support." Does Flake base his policies and views on rigorous research? I doubt it. Conservative politicians periodically go after the social sciences in Washington, and we should all hope that the current attack is as unsuccessful as previous ones have been. Is archaeology more or less rigorous than political science? If we use science definition 1, I think we are in bad shape. But we can always take refuge in science definition 2 ("see, we use complicated scientific technology!") to assert our rigor.
The second article in the June 15 Science was a short essay in a section called "Science for Sustainable Development." This essay, titled "Rigorous Evaluation of Human Behavior," is by Esther Duflo, an economist at MIT. She makes the valid and important point that the role of science in promoting sustainable development and alleviating poverty should include social scientific studies of behavior. I wonder what Jeff Flake would think about this. When I saw the title I was encouraged, but then I got to the heart of Duflo's essay: the way to conduct "rigorous" studies of human behavior is to use randomized controlled trials. "This makes for good science: these experiments make it possible to test scientific hypotheses with a degree of rigor that was not available before."
In some fields of social science and public health, the randomized controlled trial (RCT) has become the supposed "gold standard" of research methods, proclaimed to be far superior to other approaches. Apart from the fact that we simply cannot do RCTs in archaeology (except perhaps in a few very limited situations that I can't think of offhand), I must admit that my sympathies lie with the growing critique and contextualization of RCTs in social science. The RCT is a narrow approach that achieves internal rigor at the expense of external relevance and validity. Philosopher of science Nancy Cartwright puts it this way, using economics to illustrate the trade-off between internal rigor and external validity:
“Economists make a huge investment to achieve rigor inside their models, that is to achieve internal validity. But how do they decide what lessons to draw about target situations outside from conclusions rigorously derived inside the model? That is, how do they establish external validity? We find: thought, discussion, debate; relatively secure knowledge; past practice; good bets. But not rules, check lists, detailed practicable procedures; nothing with the rigor demanded inside the models” (Cartwright 2007:18).
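To make Cartwright's contrast concrete, here is a minimal simulation sketch of my own (it is not from Duflo's essay or Cartwright's paper; the populations, effect sizes, and function names are all invented for illustration). Randomizing treatment within the studied sample recovers the causal effect in that sample with great internal rigor, but nothing in the procedure says whether that estimate carries over to a different target population.

```python
# Toy illustration of internal vs. external validity; all numbers are invented.
import random
import statistics

random.seed(42)

def simulate_trial(population, n=2000):
    """Randomly assign treatment within a sample drawn from `population`
    and return the estimated average treatment effect."""
    sample = random.choices(population, k=n)       # the units we happen to study
    treated, control = [], []
    for unit_effect in sample:
        if random.random() < 0.5:                  # randomization: internal validity
            treated.append(10 + unit_effect)       # outcome under treatment
        else:
            control.append(10)                     # baseline outcome
    return statistics.mean(treated) - statistics.mean(control)

# Study population: units whose true treatment effect averages about +2.
study_population = [random.gauss(2.0, 0.5) for _ in range(10_000)]

# Target population we actually care about: effects average only about +0.5.
target_population = [random.gauss(0.5, 0.5) for _ in range(10_000)]

print("RCT estimate in the study population :", round(simulate_trial(study_population), 2))
print("True average effect in the target pop:", round(statistics.mean(target_population), 2))
```

Rerun it as often as you like: the first number stays near +2 and the second near +0.5. The randomization does its internal job perfectly, and still tells us nothing about the population outside the trial.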
Or consider a recent paper by sociologist Robert J. Sampson (2010), who promotes the value of observational research in criminology and sociology. He deflates three myths of RCTs in criminology:
Myth 1: Randomization solves the causal inference problem.
Myth 2: Experiments are assumption (theory) free.
Myth 3: Experiments are more policy relevant than observational studies.
If you want more context on internal vs. external validity, or how various social science methods relate to an experimental ideal, see Gerring (2007). He is one of those political scientists who, according to Congressman Flake, must be non-rigorous. But the message here is analogous to my views on science types 1 and 2 in archaeology. Just as archaeologists can do scientifically rigorous and valid research without involving technological methods from the hard sciences, so too can other social scientists do scientifically rigorous and valid research without the aid of formal experiments (RCTs).
Cartwright, Nancy
2007 Are RCTs the Gold Standard? BioSocieties 2(1):11-20.
Gerring, John
2007 Case Study Research: Principles and Practices. Cambridge University Press, New York.
Sampson, Robert J.
2010 Gold Standard Myths: Observations on the Experimental Turn in Quantitative Criminology. Journal of Quantitative Criminology 26(4):489-500.
Postscript--No, I don't read or keep up with the Journal of Quantitative Criminology. Robert Sampson is one of my social science heroes--someone whose research I tremendously admire, and whose methods and approaches give me inspiration (John Gerring is another). I remembered reading a passage criticizing the RCT craze in Sampson's (outstanding) 2012 book, Great American City (which is where I got the Cartwright citation). But I am in Toluca, Mexico, right now without access to my books, so I searched for "randomized controlled trials" AND "Robert J. Sampson" on Google Scholar and came up with his 2010 paper. My name is Mike Smith and I am a Google Scholar addict.