A small grumpy post about scholarly citation "analytics"

December 09, 2019

Over the course of my career, I've increasingly noticed people caring about things called "impact factors" and "citation analytics."  This isn't so prevalent in my field, at least in part because the humanities in general still rely on subjective peer judgment and distrust quantitative measurement.  I agree with both of those stances, for obscure philosophical reasons I won't get into here (while I value numbers and data, I think they can be misleading unless properly interpreted, and the process of interpretation still "bakes in" subjective judgment).  But I recognize we are perhaps a lagging field, and one day soon the metrics revolution will come for us.  So I think we need to prepare for that day now, while the sun still shines.  More or less.

This piece, from The Chronicle of Higher Education, discusses systematic efforts to "game" citation practices so as to improve particular scholars', and particular journals', "impact factors."  This is intrinsically depressing but not very surprising to me.  It seems exactly the kind of thing human beings do when handed a system to manipulate.

More pettily still, this is the first time I had ever heard of Marie E. McVeigh, who is head of editorial integrity at Clarivate Analytics, which publishes Journal Citation Reports.  McVeigh seems a decent person, but who elected her to judge all scholarship?  This seems to me a place where scholarly self-government would be useful, rather than leaving the matter to a for-profit corporation accountable to no consensus of actual experts.

But back to real matters: Here's what I think.  Every effort to avoid, evade, bypass, or otherwise outwit subjective judgment is doomed to fail.  Pieces can be cited for a myriad of reasons.  They can also go uncited for a myriad of reasons.  Furthermore, that a piece is cited is no guarantee of excellence.  Or even, perhaps, of impact--except, say, as an example of what not to attempt in an article.  A primary appeal to metrics of "excellence" and "impact" is actually, it seems to me, evidence of insecurity about the capacity of experts in the field to judge the quality of work in that field.  (And note: "insecurity" is not the same thing as "healthy skepticism" or even "a reasonable self-critical attitude."  Insecurity is restlessly, relentlessly paranoid about these things, in a way that a saner stance is not.)

Besides, when you think about it, citations are not an escape from subjectivity; they are simply a more impersonal form of subjectivity, aesthetic taste wearing a Guy Fawkes mask.  After all, citations are themselves subjective decisions, as anyone who has composed a footnote well knows.  (Here I should insert a desultory shout-out to Anthony Grafton's fun little book on footnotes, so I hereby do.)  So the appeal to "metadata," such as references to a piece, instead of data, such as, you know, direct discussion (and even evaluation) of that piece by another author in another piece, only looks like it avoids subjectivity.