Saturday, September 1, 2012

Natural experiments in archaeology

I've been wondering recently why more archaeologists don't use the method and concept of "natural experiment." In one sense, natural experiments are not uncommon in archaeology; we sometimes call them "controlled comparisons," probably borrowing that term from cultural anthropology (Eggan 1954). But we rarely use the phrase "natural experiment," which has been gaining ground in the comparative branches of the social sciences, history, and ecology. This isn't just a terminological issue; natural experiments are all about how to determine causality. Most archaeologists, however, avoid discussing causality, and this may account for the rarity of the natural experiment concept in our field.

(Postmodern, postprocessualist, and other "post" archaeologists can probably stop reading here, unless you are looking for more fodder with which to critique the simplistic, scientistic Smith.)

A natural experiment is "an observational study that nonetheless has the properties of an experimental research design" (Gerring 2007: 216). The recent collection edited by Jared Diamond and James Robinson (2010) presents a series of historical natural experiments, including one archaeological case study. In an insightful review of the book in the journal Science, James Mahoney notes:

"Historical analysts cannot, of course, test their ideas by running controlled experiments. They cannot randomly assign cases to treatment and control groups. But they can sometimes make a credible claim that the assignment of cases to different groups is 'as if' random. The label 'natural experiment' (or 'quasi-experiment') is often reserved for those studies in which this assumption seems especially plausible." (Science vol. 357, p. 1578)

It is probably no surprise that the archaeological example in this book is by Patrick Kirch (2010), who has been doing "natural experiments" or "controlled comparisons" for years. Island societies are productive candidates for natural experiments because of their boundedness and relative isolation. In this chapter he compares Hawaii, the Marquesas, and Mangaia as contrasting environments in which initially similar ancestral Polynesian societies evolved in very different directions.

[Figure omitted: Gerring's four types of experimental design in case study research (Gerring 2007: 153)]
I won't go into details here, beyond recommending Diamond and Robinson and other works on natural experiments (Gerring 2007; Dunning 2008; Labzina 2011). But just to show how this line of methodological thinking lines up with archaeology, consider John Gerring's depiction of types of experimental design in case study research (recall that most research in archaeology follows the approach called case study research in other disciplines).

Quadrant #1 is the classic experimental design. Two populations, the treatment group and control group (spatial variation), are observed through time, before and after the "treatment" or the "perturbation" (the term used in Diamond and Robinson 2010). Quadrants #2 and #3, with only temporal or spatial variation, are relatively common in archaeology. The fourth quadrant is for studies with no spatial or temporal variation. This is perhaps the most common archaeological situation. You do a study, come up with some results, and then put forward an argument about what they mean: what were the dynamics, how and why did something happen, etc. But since you have made no formal comparison of either a before-and-after nature or a spatial nature, it is very difficult to demonstrate causality. The typical course of action is to make as strong an argument as one can. "That's my story and I'm stickin' to it!" But a methodologically superior approach would be to find a way to make an explicit temporal and/or spatial comparison.

When no good comparison is available, you need to use counterfactual logic to make a causal claim. There is a rather large literature on formal counterfactual causality in sociology and political science (e.g., Gerring 2005; Heckman 2005; Morgan and Winship 2007). In its simplest form, making a causal claim without an experiment or comparison requires one to consider the counterfactual situation of what would have happened had the hypothesized causal agent not been present, or had it acted differently. One then shows that such a situation does not match reality, which gives support to the causal hypothesis.

Here is an example. I think that the ability of people (whose houses I have excavated) to cultivate cotton in Aztec-period Morelos was the main source of their economic prosperity. That is, cotton cultivation was a major cause of their prosperity. I don't have a "before and after" comparison (first no cotton, then cotton cultivation), and there are not enough comparative cases of sites where we know about cotton cultivation (presence or absence) and prosperity in enough detail. (Once some current projects are complete, this situation will improve.) So I explore the counterfactual situation: what would the local economy be like if they were not able to grow cotton? They would have had fewer resources to trade with other areas. Cotton textiles served as money, and thus cotton was far more valuable than other local resources such as maize, bark paper, or basalt. Also, it would have been far more difficult to come up with their taxes, assessed in cotton textiles. (And there would probably be far fewer spindle whorls in domestic middens.)

Now I can come up with all sorts of plausible factors to support my causal claim about cotton and prosperity, but the argument will be much more effective once I can add some formal comparisons. For example, were people at Calixtlahuaca, where cotton was not cultivated, less prosperous? This would be a start, but more cases are needed to make a strong argument. Or perhaps I could show that prosperity declined after the Spanish conquest (it almost certainly did, but demonstrating that is quite difficult), when historical sources tell us that irrigated cotton fields were converted to sugar cane.

In any case, these various comparative scenarios are natural experiments. We mostly do observational (case study) research in archaeology. To the extent that we can design and describe our research in quasi-experimental terms and use this approach to explore causality, our explanations will be more convincing, and scholars and others outside of archaeology will be more likely to view our discipline as an empirical scientific field with something to say about the world. Now, many archaeologists don't want our field to be scientific. They want to use fashionable high-level theory to interpret the past, without the constraints of scientific methods. That is fine for some purposes, but if we want anyone outside of the humanities to pay attention to us and find something of value in our research, then we need to do all we can to beef up our methods in a scientific direction. We need to pursue science #1 (epistemological science) and not just science #2 (jazzy technical methods). And natural experiments are one way to do this.

Diamond, Jared and James A. Robinson (editors)
2010    Natural Experiments of History. Harvard University Press, Cambridge.

Dunning, Thad
2008    Improving Causal Inference: Strengths and Limitations of Natural Experiments. Political Research Quarterly 61: 282-293.

Eggan, Fred
1954    Social Anthropology and the Method of Controlled Comparison. American Anthropologist 56:743-763.

Gerring, John
2005    Causation: A Unified Framework for the Social Sciences. Journal of Theoretical Politics 17: 163-198.

Gerring, John
2007    Case Study Research: Principles and Practices. Cambridge University Press, New York.

Heckman, James J.
2005    The Scientific Model of Causality. Sociological Methodology 35: 1-97.

Kirch, Patrick V.
2010    Controlled Comparison and Polynesian Cultural Evolution. In Natural Experiments of History, edited by Jared Diamond and James A. Robinson, pp. 15-52. Harvard University Press, Cambridge.

Labzina, Elena
2011    No Free Lunch: Costs and Benefits of Using the Concept of Natural Experiments in Political Science. M.A. thesis, Department of Political Sciences, Central European University.

Morgan, Stephen L. and Christopher Winship
2007    Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge University Press, New York.


p9 said...

Patrick Kirch is forever in my good books, and I've loved everything he has ever written. One of my favourite articles was about using ethnographic data from eastern Indonesia to understand archaeological material from Polynesia. Temples in parts of eastern Indonesia are often old houses, consecrated by time and the burial of the inhabitants beneath the structure. Kirch showed that this was a tradition carried on by the earliest inhabitants of east Polynesia as well, finding domestic middens next to or beneath the earliest temple sites and examining the vocabulary relating to both houses and temples across Austronesia. Brilliant anthropological sleuthing.

I haven't yet read this volume, but it's been on my Amazon wishlist for some time. Thank you for reminding me of its existence - I think I'll give it a shot.

diätplan said...

very good comment

Anonymous said...

A natural experiment is "an observational study that nonetheless has the properties of an experimental research design.”


Michael E. Smith said...

Well, I'm glad that I can generate a laugh here. I can't buy a laugh in my 9:00 AM class. Do they think my jokes are serious facts that they should learn? Maybe they thought "Bronze Age Orientation Day" was a serious factual video.

But I'm not sure what is supposed to be funny here. I am sure that narrow-minded quantitative methodologists in some fields would think it inappropriate to claim that an observational study could be considered a "quasi-experimental" method. For the limitations of the method of randomized controlled trials (the "true experimental method"), see:

Cartwright, Nancy (2007) Are RCTs the Gold Standard? BioSocieties 2(1): 11-20.

One's methods are constrained by one's type of data, one's sample, and one's goals. But I hesitate to go on here without knowing just what seems to be so funny.