Sunday, October 12, 2014

How to make a weak argument

Suppose you are writing up some archaeological results. You will be making a bunch of arguments--statements that draw on data and theory to come to some conclusion of interest. Most works contain a number of arguments, often at different levels. For example, you may claim that you found 41 pieces of obsidian in the lowest level and only 14 in the uppermost level. This is an argument, but it is not a particularly interesting one. You may later make a more interesting argument suggesting that the decline in obsidian was due to changing commercial routes that now avoided your site, or perhaps you will argue that the decline came from a reduction in blood-letting rituals that employed obsidian blades.

Now, suppose you decide that you want to make a weak argument that few of your colleagues will find convincing. While I am of course being sarcastic here, as I was in my post, "How to give a bad conference paper," this is a serious point. Why? Because it often seems that archaeologists must at some level be making this decision. They make weak arguments. So, the purpose of this post is to help them out by reminding them of tips and tricks for making bad arguments. I will mostly provide links to past blog posts where I discuss these issues.

(1) Use analogy incorrectly.

Do not develop a formal argument by analogy, based on a sample of source examples and carefully extrapolated to your archaeological data. Ignore Lewis Binford's suggestion to treat an analogy as a hypothesis to test, and whatever you do, make sure you avoid Alison Wylie's brilliant and definitive discussion of the role of analogy in archaeology. Cherry-pick one analogical case from somewhere in the world and claim that it supports your case.

You could check out my previous post on this topic, although be warned that it is a reverse argument: it assumes that you might want to make strong arguments and use analogy well.

(2) Make post-hoc interpretations.

Don't bother to set up initial hypotheses or expectations. Who knows what you will find when you dig into the ground, anyway? Do your fieldwork or lab analysis, then scratch your head and try to dream up a nice-sounding interpretation. Slap some currently fashionable idea onto your data, and voila, you are done.

You could check out my earlier post on post-hoc arguments, or the one on trying to prove that you are wrong.

(3) Use empty citations to back up your shoddy scholarship.

Don't use citations to other works to supply data and cases that provide a foundation for your arguments. Instead, cite sources that have no empirical data, but rather offer opinions and speculations that agree with your argument. Avoid citing studies with data that go against your views or models; instead cite those that agree with your ideas but lack any data. These are called empty citations.

You can check my prior post on this topic, and please follow the links there to Anne-Wil Harzing's original discussion of empty citations. Oops, I am being straight here, not sarcastic.

I find it really depressing that the archaeological literature (particularly the archaeology of complex societies) is so full of weak arguments. This prevents the development and accumulation of reliable archaeological findings, which impedes the empirical advancement of our field. The sloppy use of analogy, post-hoc interpretations, and empty citations are all part of the picture. We need to get our act together. If you have not read and carefully studied Wylie (1985), you should do that immediately. And then check out chapter 7 of Booth et al. (2008). Check out some of the methodological works from the social scientists I cited in my previous post. And finally, read my article on this topic (once I get around to writing it.......)

Booth, Wayne C., Gregory G. Colomb and Joseph M. Williams  (2008)  The Craft of Research. 3rd ed. University of Chicago Press, Chicago.

Wylie, Alison  (1985)  The Reaction Against Analogy. Advances in Archaeological Method and Theory 8:63-111.


Anonymous said...

You say:

"Don't use citations to other works to supply data and cases that provide a foundation for your arguments."

Maybe in an ideal world, but in practice this is terrible advice. People don't pad their writing with citations just for the fun of it - there are strong, and usually irresistible, pressures on scholars that explain this phenomenon.

I can't think of a single paper I've submitted where there weren't comments along the lines of "why doesn't the author cite X". And it's almost never because I need data from those citations to support my arguments. It's because they are on the same topic as the paper I'm writing (quite a different thing).

In part this is because we are required to demonstrate scholarly expertise ("look how well read I am") to show we're real members of the club. But it's also personal - the citations the reviewers are telling you to put in are often their work or their friends' work.

Citations are currency in the modern academy. If you can say your paper has been cited 500 times rather than 5 times, how does that impact your chances for promotion or tenure? Or getting that new research center/lab you want from the administration? And it's only getting worse as things like Google Scholar make the quantification of citations easier and near-automatic.

You're complaining about a consequence here, rather than a root cause. And that root cause is the fact that we live in a hyper-competitive academic system, where success in getting jobs, grants, and publications all depends on others' failure. Frankly, and I don't mean to be combative here, but your complaint comes from the privileged position of someone with tenure, who has therefore already amassed a lot of social capital in academia - and has security of employment.

For junior people, especially grad students, failure to cite excessively will offend the often prickly people reviewing their grant applications and article submissions. "They're writing on topic X. But I've written about topic X too and they don't cite my work, how dare they..."

Maybe if academics didn't have egos... but let's be honest here about how the system actually works.

Michael E. Smith said...

You are describing a (very real) phenomenon that is not the same as empty citations in Harzing's sense, although the two do overlap. Harzing was talking about citing sources without relevant data, as if they in fact contained relevant data. What you describe is padding the bibliography. This can be done to satisfy editors and reviewers WITHOUT engaging in empty citations.

You can cite some sources and say something like, "Authors X, Y, and Z have also worked on this topic" or "The findings of Author X (who does have data and findings) are accepted by Author Y." If ten authors agree with a published finding, that fact is relevant to an assessment of the finding (compared to ten authors saying that it is BS). But you can indicate that fact without using wording that suggests that all ten authors provide independent data in support of the original finding.

Using citations strategically is a basic fact of our academic culture, as you point out. Every proposal-writing class talks about this. If you can figure out who is likely to review your proposal (or journal paper), make sure to cite those people. Many journals now explicitly suggest that authors cite papers from that journal (in order to boost citations and impact factors). One of my past three paper acceptances included such a suggestion, with a statement to the effect of, "Have you cited relevant works published in this journal?" That came at the point when revisions were requested. In fact, I had already cited one or two papers from that journal, so I refrained from adding any more.

I just reviewed a ms for a journal, and I was insulted to be asked (in a long list of yes/no questions, most of which could not be answered with a simple yes or no) whether the author had cited relevant works from that journal. I tried to leave the answer blank, but then the system would not accept my review. This is getting out of hand.

You are correct that I do have a privileged position within academia. I have tenure at a good university and considerable "social capital in academia." But that does not at all exempt me from these same pressures and academic baloney. By being an active scholar not afraid to be critical of colleagues, I have made a number of enemies in my career. There are people out there who do what they can to shoot me down in anonymous reviews. Strong criticism is fine when deserved and I have a pretty thick skin. But I get some serious anonymous attacks on some very good papers and grants (to judge by the other reviews, or by the ultimate success of the paper or grant). Sometimes editors see through this and make the right decision (I am clearly biased here!), but other times it leads to rejections or revisions which delay the whole process.

If I knew who these people were, I could at least toss them a bone by citing them with some nice prose. But I am clueless. So in that sense, young scholars have an advantage. It should be easier to figure out whom to cite for political reasons, and it's better to have a reviewer who is cranky about not being cited sufficiently (sorry, that is just a fact of life, and it has been since I started publishing over 30 years ago) than to have a reviewer who is out to get you personally.