Thursday, July 7, 2011

Journal impact factors, scholars who don't cite the literature

In case you were wondering whether the journal Endeavor (a British journal on the history and philosophy of science) was improving its impact factor or not, I pass on this email I just got from Elsevier:

[journal cover image]

2009 Impact Factor: 0.167
2010 Impact Factor: 0.245

Listed by highest Impact Factor (journal titles not preserved in the email text): 1.710, 1.623, 0.983, 0.447, 0.325
Wow, Endeavor is up from 0.167 to 0.245! That makes my lone contribution to the journal (a book review a number of years ago) much more visible and valuable! One thing that puzzles me, though, is why did they send me this particular list of journals? I can understand if Elsevier has in its database that I have published recently in the Jr. of Anth. Archy and Jr. Historical Geography. But how did they link me up with Endeavor, from one book review nine years ago? There are literally hundreds, perhaps thousands, of Michael Smiths out there publishing in academic journals. Are they looking at my website? Or perhaps the data are from ResearcherID? (I'm not sure if the book review is listed there or not).
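For readers unfamiliar with the metric in the email above: a journal's two-year impact factor is the number of citations received in the census year to items published in the previous two years, divided by the number of citable items published in those two years. Here is a minimal sketch of that arithmetic; the counts are hypothetical illustrations, not Endeavor's actual figures.

```python
def impact_factor(citations, citable_items):
    """Two-year journal impact factor: citations received in the census year
    to items from the previous two years, divided by the number of citable
    items published in those two years."""
    return citations / citable_items

# Hypothetical example: 49 citations in 2010 to items published in
# 2008-2009, out of 200 citable items, yields an impact factor of 0.245.
print(round(impact_factor(49, 200), 3))  # → 0.245
```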

In any case, this provides a link to my second topic, scholars who don't cite the literature. I was reading Charles Tilly's review of a book by S. N. Eisenstadt, in which Tilly took the author to task for leaving out large bodies of relevant literature (the review is republished in Tilly's excellent book, Explaining Social Processes). This reminded me of my contribution to the journal Endeavor, a review of Ross Hassig's book, Time, History, and Belief in Aztec and Colonial Mexico. Hassig has some interesting ideas about how Aztec calendars were linked to political processes, and I find his materialist explanation of various calendrical issues congenial.

However, Hassig's methods leave something to be desired. He analyzes the primary sources well, evaluates them against one another, and reaches his conclusions, which seem reasonable on the basis of his argument. However, he completely ignores the secondary literature on those topics. For example, from his discussion of whether the Aztecs added a day to their annual calendar every four years (leap years) to account for the length of the solar year, one would never know that this is a contentious issue, with much debate and many publications by specialists. And since much of this literature draws on sources and concepts and methods not covered by Hassig, his discussion is incomplete and out of step with the literature. Some of his conclusions match consensus views, and some are out in left field, but a non-specialist reader (perhaps someone interested in Babylonian calendars) would never know which is which.

So, who should one pay attention to? Ross Hassig (generally a very good scholar), or the larger body of specialist literature that he does not cite? In science, the results of research are judged by a community of scholars (Calleigh 2000; Harris 1979:chapter 1). This is an amorphous group of individuals, who express themselves in publications, peer reviewing, lectures, emails, conversations over beers, and (increasingly), blogs. Hassig's work would be more convincing if he could show that his conclusions are not contradicted by the findings of other scholars. I don't claim to understand the details of Aztec or Mesoamerican calendrics (a ridiculously complicated topic), but I do trust the community of scholars more than I trust a single author laboring in isolation. And Hassig's contributions to the community of scholars would be greater if he engaged with the research of the rest of the specialists (e.g., Anthony Aveni, Hanns Prem, Edward Calnek, Michel Graulich, Rafael Tena).

Yes, scholars often work alone. But our research is part of broader disciplinary and transdisciplinary contexts, and it is judged not by absolute standards but by the relevant communities of scholars. We should all cite the literature, and frown on those who choose not to.

Calleigh, Addeane S.
2000    Community of Scholars. Academic Medicine 75(9):912.


Harris, Marvin
1979    Cultural Materialism: The Struggle for a Science of Culture. Random House, New York.

7 comments:

Chris said...

"We should all cite the literature, and frown on those who choose not to."

Amen.

I had two articles this past year, both in major journals (one of which is arguably the most prestigious anthro journal in the western world), and neither of my articles got recognized in AA's recent "what happened in archy in 2010" piece.

Hahaha...sigh... Frustrating, but I guess this is not really what you are referring to in your posting. There is so much literature out there and so many journals that it is understandably challenging to stay up on things. But I suppose that is why there is a community of researchers. I find one of the most rewarding aspects of the peer review process is having my eyes opened to studies I overlooked.

Too bad about the Hassig book. His Trade, Tribute, and Transportation, however, will always occupy a special place on my shelf.

Michael E. Smith said...

@Chris- I have the review article you mention in my stack of papers to read, so I can't comment on it yet. I was quite upset at the first installment of that series of annual archaeology reviews, in AA, in 2009; see:

http://publishingarchaeology.blogspot.com/2009/08/americanist-archaeology-is-slighted-in.html

But I will let you in on one of the dirty little secrets of our field: People don't cite recent literature very much. One of my biggest disappointments about my career is the lack of feedback (citation, discussion, whatever) on most things I publish. Maybe my work is boring and insignificant and I am being overly egotistical. Or maybe archaeologists just don't cite or discuss current work very much. So don't feel bad (or, go ahead and feel bad...), but your situation is probably quite typical.

I wonder if someone has worked on quantifying the recent-citations issue, comparing disciplines. Here is one indication. My h-index is 9 (I have nine publications that have been cited nine or more times, in the Thomson database). I was reading something by a biologist on citation metrics, and he suggested that a mid- to late-career scholar, engaged in active research and publication, should have an h-index of 30 or 40. I think I publish at least as frequently as other archaeologists at my stage.

Some people came out with a way of using Google Scholar to generate h-indexes and other metrics, and I played around with it. The only archaeologists I could find up in the 40 range (for h-index) were people like Renfrew, Hodder, and Flannery. My index came up over 100 on their scale, proving that either I am far more influential than those guys, or else work by other "Michael Smiths" was being included in my metrics.

Anonymous said...

I notice that reviewers of articles seem to judge the extent of an author's citations based on the status of the author, often wearing kid gloves for more influential folks. How anonymous are articles when they go out for review? I know Am Anth does blind reviews. Do Am Ant and Lat Am Ant do the same, or can reviewers see the authors' names? That would seem to bias the reviews, no?

Anonymous said...

@Chris: Sometimes people systematically and deliberately do not cite others' works simply to disregard them for competitive reasons. It is inexplicable but common. My dissertation research involved a method and form of data that only one other person was studying at the time. This person refused and still refuses to cite my work despite many publications. Unfortunately, I have had many similar encounters over the years. Best not to lower oneself to such a level and always strive for professionalism.

DB

Scott said...

Sorry for the late comment; I came across this post through a Google search. How was the experience of publishing archaeological material in Journal of Historical Geography? I'm considering submitting there and was curious for your opinion of the experience. Was the review process fairly efficient? Thanks for any input you can give!

Michael E. Smith said...

@Scott - As I recall, the review process was efficient and not too slow, but my co-author was the one dealing with the journal. They sent it out anonymously for review, and one reviewer (who is a good friend, but unaware I was an author) complained that the paper didn't cite Smith enough!

Scott said...

Thanks, this is helpful!