I am a co-editor of a “new” journal, Surveys in Operations Research and Management Science, published by Elsevier. I’ll write more about that journal and my thoughts on it in another post. I expect to be blasted by some people whose opinions I value for teaming up with a commercial publisher, but I did have my reasons!
I spent time this past weekend in Phoenix at an Elsevier editors conference where there were about 70 editors from a wide variety of fields (lots of medicine and chemistry). During the weekend, there were a number of presentations on things like blogging and electronic paper handling and so on. One session I enjoyed very much was about bibliometrics: measures to determine the impact of a journal. I had kinda known some of this before, but it was interesting to get a synopsis of how these things work.
The standard “impact factor” comes from the succession of companies that have owned the Science Citation Index, now the ISI Web of Science (Thomson Reuters). Briefly, to calculate the impact factor (IF) of a journal for 2005, you take the 2005 articles from that journal, add up all the references to those articles published in any ISI journal in 2006 and 2007 (no later), and divide by the number of articles published in 2005. There are lots of details to argue over: what counts as an “article”? What counts as a “reference”? Which journals should be in the ISI list? And so on.
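The arithmetic itself is trivial; here it is as a few lines of Python, with a journal and counts invented purely for illustration:

```python
# Toy two-year impact factor, following the definition above.
# The article and citation counts below are invented for illustration.

def impact_factor(citations_next_two_years, articles_published):
    """Citations (in 2006-2007) to a journal's 2005 articles,
    divided by the number of articles the journal published in 2005."""
    return citations_next_two_years / articles_published

# Hypothetical journal: 80 articles in 2005, cited 120 times in 2006-2007.
print(impact_factor(120, 80))  # prints 1.5
```

All the real arguments are about what goes into the numerator and denominator, not the division.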
Even the basic structure gives one pause in determining impact. This definition means that all the wonderful citations my old papers on voting in Social Choice and Welfare receive (a set of papers from the late 80s that are currently in vogue) never count toward that journal’s impact for any year: they fall outside the two-year window. For some very fast-moving fields (say, genetics), a two-year window might be appropriate. But for others, including operations research I would say, this window measures the wrong things, ignoring the citation peak for many papers.
Further, there are lots of ways to manipulate this value (I will point out that the Elsevier presenter explicitly stated that journals should not do anything specifically to manipulate any impact factor). I have heard of journals that, upon accepting a paper, provide authors with a list of reference suggestions from that journal within the two year window. “No pressure, mate, but you might consider these references… helps us out a lot, you know!” Pretty slimy in my view, but it is done.
What I found most interesting is that there are other measures of impact, some of which seem to be gaining traction. The most intriguing is a measure that uses the same eigenvector approach that Google uses in PageRank. Imagine journals as a network, with edge weights giving the number of times articles in one journal reference another journal. This gives an influence diagram, and the leading eigenvector gives (in a well-defined way) the importance of a node relative to the number of references.
It is certainly not clear that the number of references is a good proxy for influence, and not every reference is the same. Consider “In the fundamental work of [1], disproving the absurd argument of [2], which built on [3,4,5,6,7,8]”: all those articles are referenced once each, but I know which one I would want to be mine. Still, if you are going to base a measure on reference counts, I would certainly trust an eigenvector-based approach over pure counting.
The approach, outlined in detail at eigenfactor.com, has the further advantages that it uses a five-year window and ignores journal-level self-citations. The five-year window gives more time for citations to a paper to count, without giving a huge advantage to older journals. Ignoring self-citations removes the easiest avenue for manipulation by a journal editor. So I like it!
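To make the eigenvector idea concrete, here is a minimal sketch. The four journals and every citation count are invented, and this is plain PageRank-style power iteration rather than the exact algorithm eigenfactor.com uses; the key moves are zeroing the diagonal (ignoring journal self-citations) and normalizing each journal’s outgoing citations so a journal that cites everything heavily doesn’t get extra “votes”:

```python
# Toy journal-influence calculation in the spirit of PageRank / eigenfactor.
# The four journals and all citation counts are invented for illustration.

journals = ["J1", "J2", "J3", "J4"]
# cites[i][j] = times journal j's articles cite journal i.
cites = [
    [0, 10, 2, 0],
    [8,  0, 6, 3],
    [1,  4, 0, 5],
    [0,  2, 3, 0],
]  # diagonal is zero: journal self-citations are ignored

n = len(journals)
# Total outgoing citations per journal, used to normalize its "votes".
col_tot = [sum(cites[i][j] for i in range(n)) for j in range(n)]

d = 0.85                      # damping factor, as in PageRank
v = [1.0 / n] * n             # start from a uniform influence vector
for _ in range(100):          # power iteration toward the leading eigenvector
    v = [(1 - d) / n + d * sum(cites[i][j] * v[j] / col_tot[j]
                               for j in range(n))
         for i in range(n)]

for name, score in sorted(zip(journals, v), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The scores sum to one, so each journal’s value can be read as its share of the field’s total influence, which is roughly how the per-field eigenfactor percentages below work.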
The site eigenfactor.com lets you look at Eigenfactor and per-article influence rankings for journals. There are a couple of different classifications of journals, so let’s look at JCR’s “Operations Research and Management Science” list. The 2007 per-article rankings are:
- Management Science
- Mathematical Programming
- Operations Research
- Mathematics of OR
- Transportation Science
Eigenfactor scores (which measure the overall impact of a journal) move things around a bit:
- European Journal of Operational Research
- Management Science
- Mathematical Programming
- Operations Research
- Systems and Control Letters
EJOR is on top since the journal has a good per-article influence score and publishes lots of articles.
INFORMS journals do pretty well, with 4 of the top 5 in the first list and 3 of 5 in the second.
What is really neat is to look at the cost of getting those eigenfactor values. It would cost $93,408 to subscribe to all 58 journals (at individual journal prices: undoubtedly the large publishers bundle their subscriptions, as does INFORMS). Paying the $656 (in 2007) for Management Science is 0.7% of that cost but gets you more than 10% of the total eigenfactor in this field. Subscribing to the top 11 journals in this ranking would cost $5723 (and get you 7 INFORMS journals) and more than 1/3 of the total eigenfactor. Adding the 12th would get you European Journal of Operational Research, but at $5298 it would practically double your cost while increasing your total eigenfactor share from 37.8% to just 49.4%. Other amazing prices: Engineering Optimization costs $4338 for much less than 1% of the field’s eigenfactor, and International Journal of Production Research costs $7684, albeit for 8% of the total eigenfactor.
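The comparison boils down to dollars per percentage point of the field’s total eigenfactor. A quick sketch using the prices and shares quoted above; where the text gives only a bound (“more than 10%”, “much less than 1%”), the bound itself is plugged in, so those ratios are rough:

```python
# Dollars per percentage point of the field's total eigenfactor, using the
# 2007 list prices and shares quoted above. Bounds ("more than 10%",
# "much less than 1%") are used as-is, so some ratios are only rough.
options = {
    "Management Science":                       (656,  10.0),
    "Top 11 journals combined":                 (5723, 37.8),
    "European Journal of Operational Research": (5298, 49.4 - 37.8),  # marginal share
    "Engineering Optimization":                 (4338, 1.0),
    "International J. of Production Research":  (7684, 8.0),
}
for name, (price, share) in sorted(options.items(),
                                   key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"${price / share:>8,.2f} per eigenfactor point  {name}")
```

Management Science comes out around $66 per point; Engineering Optimization, at over $4,000 per point even under the generous 1% assumption, is the outlier in the other direction.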
Now, there are lots of caveats here. Most importantly, while reference numbers are a proxy for impact, they are not equivalent. If you have a paper that applies operations research to a real problem, publishing in Interfaces might have the most impact, even if the journal is ranked 21st by eigenfactor. And when it comes to costs, I am not sure anyone really pays “list price” in this day of aggregation (and prices for individuals are much lower for many journals).
When you are arguing with your librarian on which journals to cut (or, more rarely, add), you might want to look at some of this data. And might I suggest the full suite of INFORMS journals? At $99 for an individual for online access (and under $5000 for institutions), this should give you the recommended daily allowance of eigenfactors at a very affordable price. Makes a great stocking stuffer at Christmas!
Don’t mention Elsevier anywhere near our head librarian; the perception around here is that they use a pricing scheme the Mafia would be embarrassed to try.
I agree with you that a two-year window for impact factors seems artificially tight in most areas. There’s another thing I worry about with impact factors, and that’s the lack of normalization with respect to discipline standards for citations. I’m in a department with org behavior and strategy people, and the citation list on one of their papers flirts with the length of the entire text of one of mine. As long as you’re comparing journals within the same discipline, this is a wash, but we look at citation counts when judging individuals, and I think some of my colleagues look at impact factors when judging what the “A journals” are in disciplines with which they are unfamiliar. (I’m having an ongoing argument with my department chair about whether OR is an A journal in operations research.)