The Dangers of Preprint Servers

Now that I have moved (at least partially!) into academic administration, my colleagues ask me for advice on publishing strategy. A situation involving one of my colleagues has made me question my understanding of how precedence of research results works. I’d love some feedback to help me understand what went wrong here.

My colleague, call him R1, proved a couple of theorems in a fast-moving subfield of optimization. He wrote up the results and on March 1 submitted the paper to The Slow but Prestigious Journal of Optimization, which I will call SJ (the characters get confusing, so the inset Cast of Characters may help). He also posted the paper on the well-known eprint servers Optimization Online and arXiv (OO/A). The paper then began its slow, arduous, and thorough refereeing at SJ.

On August 1, R1 received a perky email from researcher R2 with a paper attached saying “Thought you might be interested!”. The paper contains a subset of R1’s results with no reference to R1’s work. This is not a preprint, however, but an “article in advance” for a paper published in the Quick and Fast Journal of Optimization, QJ. QJ is a journal known for its fast turnaround time. The submission date of R2’s work to QJ is March 15 (i.e., two weeks after R1 posted on OO/A and submitted to SJ).

R1 lets R2 know of his paper, pointing to OO/A.  R1 never hears anything more from R2.

R1 contacts the editors of QJ suggesting some effort be made to correct the literature with regard to the precedence of this work.  QJ declines to change R2’s paper since it has already been published, and the large commercial publisher (LCP) does not allow changes to published articles (and, besides, R2 won’t agree to it).

OK, what about publishing a precedence acknowledgement in the form of a letter to the editor?  I find this somewhat less than satisfying since the letter to the editor is separate from the paper and no one reads journals as “issues” anymore.  But at least QJ would be attempting to correct this mess.  And here is where I get both confused and outraged.  The editor’s response is:

Also, during consultations with [LCP]’s office, it became clear that LCP does not approve of publishing a precedence acknowledgement towards a paper in public domain (preprint server). I hope you would agree that the fact that a paper is posted on a preprint server does not guarantee its content is valuable or even correct – such (partial) assurances can be obtained only during peer-review process.

Hold on, what?  QJ and LCP are saying that they will ignore anything that is not in a peer-reviewed journal!  R2 does not have to say anything about R1’s result since it has not been refereed.  Further, unless R1 gets the paper published in SJ with the March 1 submission date, QJ will not publish a precedence acknowledgement.  If the paper gets rejected by SJ and my colleague then publishes in the Second Tier Journal on Optimization, the submission date there will clearly be after QJ’s, so R2 takes precedence.  If the paper doesn’t get published, then R2 and QJ will simply act as if R1 and OO/A do not exist.

I find this situation outrageous.  I thought the point of things like OO/A is to let people know of results before journals like SJ finish their considered process of stamping their imprimatur on papers.  If the results are wrong, then subsequent authors at least have to point out the flaws sometime during the process.

Now I don’t know if R2 saw R1’s paper at OO/A.  But if he did, then R1’s posting at OO/A at least warned him that he had better get his paper submitted.  Of course, R1’s paper might have helped R2 get over some roadblocks in R2’s proof or otherwise aid him in finishing (or even starting, though there are no overt signs of plagiarism) his paper.  But it seems clear there was absolutely no advantage for R1 in posting on OO/A, and clear disadvantages to doing so.  R1 would have been much better served to keep his results hidden until acceptance at SJ or elsewhere.

This all seems wrong.  R1 put out the result to the public first.  How did R1 lose out on precedence here?  What advice should I be giving colleagues about this?  Here is what I seem to have learned:

  1. If you don’t have any ideas for a paper, it is a good idea to monitor OO/A for results.  If you find one, quickly write it up in your own words and submit it to QJ (but don’t post on OO/A).  If you get lucky and the referees miss OO/A (or follow LCP’s rule and ignore anything not in the refereed literature), then you win!
  2. Conversely, if you have a result, for God’s sake, don’t tell anyone.  Ideally, send it to QJ who can get things out fast.  If you must, submit it to SJ but don’t post the preprint, present it at INFORMS, or talk about it in your sleep.

This all seems perverse.  How should I think about this?  Has anyone faced something similar?  Does anyone see a satisfactory resolution to this situation?  And, for those on editorial boards, does your journal have policies similar to or different from those of LCP?  Is this ever discussed within journal boards?  Is all this a well-known risk?

 

Referees considered harmful

When doing empirical work, researchers often mess up either in the design of the experiment or in the analysis of data.  In operations research, much of our “empirical work” is in computational testing of algorithms.  Is algorithm A faster than algorithm B?  “It depends” is generally the only honest answer.  It depends on the instance selection, it depends on the computing environment, it depends on the settings, and so on.  If we are careful enough, we can say things that are (within the limits of the disclaimers) true.  But even a careful experiment can fall prey to issues.  For instance, throwing away “easy” instances can bias the results against whatever algorithm is used to determine easiness.  And don’t get me started on empirical approaches that test dozens of possibilities and miraculously find something “statistically significant”, to be duly marked with an asterisk in the table of results.  It is very difficult to do truly trustworthy empirical work.  And it is even harder to do such work when researchers cheat or reviewers don’t do their job.
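
As a hedged illustration of that last multiple-testing trap (entirely made-up numbers, not from any real study): compare two identical algorithms across thirty instance classes and count how often pure noise clears a naive significance threshold.

```python
import random

# Illustrative only: two *identical* algorithms compared on 30 instance
# classes. Any "significant" difference is a false positive.
random.seed(0)
trials, false_positives = 30, 0
for _ in range(trials):
    diffs = [random.gauss(0, 1) for _ in range(10)]   # run-time differences: pure noise
    mean = sum(diffs) / len(diffs)
    sd = (sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
    t = mean / (sd / len(diffs) ** 0.5)               # one-sample t statistic
    if abs(t) > 2.26:                                 # ~5% two-sided cutoff, 9 df
        false_positives += 1
print(f"{false_positives} of {trials} comparisons look 'significant' by chance alone")
```

With a 5% cutoff and thirty comparisons, an asterisk or two is expected even when there is nothing at all to find.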

For some fields, these issues are even more critical.  Operations research generally has some theoretical grounding:  we know about polytopes and complexity, and so on, and can prove theorems that help guide our empirical work.  In fields like Social Psychology (the study of people in their interactions with others), practically all that is known is due to the results of experiments.   The fundamental structure in this field is a mental state, something that can only be imprecisely observed.

Social psychology is in a bit of a crisis.  In a very real sense, the field no longer knows what is true.  Some of that crisis is due to academic malfeasance, particularly that of the influential researcher Diederik Stapel.  Stapel has been found to have invented data for dozens of papers, as described in a “Science Insider” column.

Due to data fraud by Stapel and others, the field has to reexamine much of what it thought was true.  Are meat eaters more selfish than vegetarians?  We thought so for a while, but now we don’t know.  A Dutch report goes into great detail on this affair.

But overt fraud is not the only issue, as outlined in the report.  I was particularly struck by the role paper reviewers played in this deceit:

It is almost inconceivable that co-authors who analysed the data intensively, or reviewers of the international “leading journals”, who are deemed to be experts in their field, could have failed to see that a reported experiment would have been almost infeasible in practice, did not notice the reporting of impossible statistical results, … and did not spot values identical to many decimal places in entire series of means in the published tables. Virtually nothing of all the impossibilities, peculiarities and sloppiness mentioned in this report was observed by all these local, national and international members of the field, and no suspicion of fraud whatsoever arose.

And the role of reviewers goes beyond that of negligence:

Reviewers have also requested that not all executed analyses be reported, for example by simply leaving unmentioned any conditions for which no effects had been found, although effects were originally expected. Sometimes reviewers insisted on retrospective pilot studies, which were then reported as having been performed in advance. In this way the experiments and choices of items are justified with the benefit of hindsight.

Not infrequently reviews were strongly in favour of telling an interesting, elegant, concise and compelling story, possibly at the expense of the necessary scientific diligence.

I think it is safe to say that these issues are not unique to social psychology.  I think that I too have, as a reviewer, pushed toward telling an interesting story, although I hope not at the expense of scientific diligence.   And perhaps I could have worked harder to replicate some results during the reviewing process.

I don’t think we in operations research are in crisis over empirical issues.  I am pretty confident that CPLEX 12.4 is faster than CPLEX 4.0 for practically any instance you can throw at it.  And some journals, like Mathematical Programming Computation, have attempted to seriously address these issues.  But I am also pretty sure that some things I think are true are not true, either due to fraud by the author or negligence by reviewers.

One important role of a reviewer is to be on the lookout for malfeasance or bias and to avoid allowing (or, worse, forcing) authors to present data in an untruthful way.  And I think many of us are not doing a great job in this regard.  I would hate to have to rethink the foundations of my field due to these issues.

Become an Operations Research Editor!

While the popular conception of a university professor is someone who stares at arcane notation on a whiteboard until interrupted by the need to teach pesky undergraduates, there are many more activities that are part of the professorial portfolio.  We drink coffee with colleagues, gossip about departmental politics, attend conferences in far-flung locales, referee papers, train doctoral students, write blog entries, tweet, volunteer for professional societies, and much more.  There are a ton of things that can go into a professional life.

One key professional role is that of editor of a professional journal.  Editing a journal is not a job to take on lightly.  It requires a 3-5 year commitment, and that commitment is continuous.  Except for editing the “big journals” like Management Science or Operations Research, an editorship is not terrifically time consuming, requiring just a few hours per week.  But it requires those hours each and every week:  nothing will kill a journal faster than an on-and-off editor who responds only when crises have grown too large to be ignored.

In return for that time, the editor can have a unique and personal effect on a journal.  The editor’s judgement will determine the quality of the journal, and the editor’s energy will define the scope and creativity in the journal.

There are two journals that are looking for editors for which this scope and creativity issue is particularly important:

  1. INFORMS Transactions on Education.  ITE is an online-only journal of INFORMS with a goal of advancing education in OR/MS.  The journal has published pieces on educational theory, case studies, surveys, curricula, and much more.  I have found the journal to be very useful as I prepare my classes, and I have published in it.  This would be a great post for a creative researcher with a passion for educational issues.  Nominations are due June 30, 2012.
  2. Surveys in Operations Research and Management Science.  I am even closer to this journal, since I am one of the three co-editors (along with Jan Karel Lenstra and Bert Zwart).  This journal was designed as the follow-up to the well-regarded Handbooks in ORMS that Jan Karel Lenstra and George Nemhauser handled for a decade or so.  The idea was to publish high quality surveys (like in the Handbooks) without the lead time required by the Handbooks.  Like many new journals, it has been a real task to get off the ground, but we will have published three years’ worth of journals at the changeover.  This journal needs a highly energetic, well-connected editor who can give it near-undivided attention over the next few years to put the journal on solid footing.  It is an Elsevier journal, which gives it some disadvantages (some choose not to work with commercial publishers) and advantages (editorial support is very, very good).  I’ve greatly enjoyed working with Jan Karel and Bert and the rest of the team on this, but it needs an individual or group who is less scattered in their interests than I am at this point.  Applications are due July 31, 2012.

Taking on a journal is a big responsibility, but it can be very rewarding. Short of doing Lanchester Prize level work, it is one of the best opportunities you have to have a real effect on the field.

A New ISI Operations Research Journal

I have mixed feelings about things like journal impact studies.  Once a ranking is announced, forces come into play to game the ranking.  For journals, I have seen things like “helpful suggestions” from the editor on references that should be added before the paper can be accepted (“Perfectly up to you, of course:  let me see the result before I make my final decision”).  Different fields have different citation rates, making it difficult to evaluate journals in unfamiliar fields.  Overall, I don’t know what to make of these numbers.

I think I am particularly annoyed about these rankings since my most cited paper (according to Google) doesn’t even exist, according to “Web of Knowledge”, the current face of what I knew as the Science Citation Index.  According to “Web of Knowledge”, my most cited papers are “Voting schemes for which it can be difficult to tell who won the election” and “Scheduling a major college basketball conference”.  If you go to Google Scholar or, better yet, use Publish or Perish to provide an interface into Scholar, my most cited works are the volume I did with David Johnson on the DIMACS Challenge on Cliques, Coloring, and Satisfiability and “A column generation approach for graph coloring” (with Anuj Mehrotra).  “Voting schemes…” and “Major college basketball…” come in third and fifth.  Now I understand that the volume is difficult to work with.  Editors of refereed volumes don’t often do much research in putting together the volume, though I would argue that this volume is different.  But where is “A column generation approach…” in Web of Knowledge?  How can my most referred-to (and certainly one of my better) papers not exist there?

It turns out that in 1996, when “A column generation approach…” was published, the INFORMS Journal on Computing, where it appeared, had not been accepted by ISI, so, according to it and its successors, INFORMS Journal on Computing, Volume 8, does not exist (indexing seems to have started with Volume 11).  Normally this wouldn’t matter much, but we do keep track of “most cited” papers by the faculty here, and it hurts that this paper is not included.  And including it would increase my Web of Knowledge h-index by one (not that I obsessively check that value more than a dozen times a year and wonder when someone is going to cite the papers that just need one or two more cites in order to… sorry, where was I?).
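
For readers who do not track it quite so obsessively, the h-index itself is simple arithmetic; here is a minimal sketch with made-up citation counts (not my actual record):

```python
# The h-index is the largest h such that at least h papers have at least
# h citations each. Citation counts below are purely hypothetical.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

papers = [310, 220, 150, 90, 40, 33, 21, 12, 12, 9, 7, 3]
print(h_index(papers))   # 9 with these made-up counts

# Dropping (or indexing) one well-cited paper near the threshold is exactly
# the kind of change that moves the index by one.
```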

This is a long way of saying that while I am not sure of the relevance of journal rankings and ISI acceptance, I certainly understand their importance.  So it is great when an operations research journal I am involved in, International Transactions in Operational Research, gets accepted into ISI.  ITOR has done a great job in the last few years of transitioning into a good journal in our field.  The editor, Celso Ribeiro, has worked very hard on the journal during his editorship (I chaired the committee that chose Celso, so I can take some pride in his accomplishments).  ITOR is a journal of the International Federation of Operational Research Societies (IFORS), so this is good news for them too.  Some schools only count journals with ISI designation; ITOR gives faculty in those schools a new outlet.

Congratulations ITOR and Celso!

P versus NP and the Research Process

By now, everyone in computer science and operations research is aware of the purported P<>NP proof of Vinay Deolalikar of HP Labs.  After intense discussion (mainly through blogs and wikis), the original paper was taken down, and Vinay has prepared a new version for submission.  He claims:

I have fixed all the issues that were raised about the preliminary version in a revised manuscript (126 pages); clarified some concepts; and obtained simpler proofs of several claims. This revised manuscript has been sent to a small number of researchers.  I will send the manuscript to journal review this week. Once I hear back from the journal as part of due process, I will put up the final version on this website.

I am convinced by Dick Lipton’s blog entry and by Scott Aaronson’s commentary suggesting fundamental flaws in the paper, but since Vinay has not retracted it, I will look forward to the definitive version.  For a detailed description of the issues, press coverage, and other aspects, polymath has an extensive wiki on the topic.

What I find most intriguing is the field’s response to this claimed proof.  Proofs that P=NP or P<>NP are certainly not uncommon.  Gerhard Woeginger faithfully keeps track of these claims and is up to 62 in his list.  Some of these results come out of the operations research world.  For instance, Moustapha Diaby is a faculty member at the business school at the University of Connecticut and believes he has found linear programming formulations for NP-hard problems (number 17 on Woeginger’s list).

The Deolalikar paper is unique, however, in that many, many top mathematicians looked very closely at the paper and worked very hard to determine its correctness.  This group includes people like Fields Medal winner Terence Tao and Pólya and Fulkerson Prize winner Gil Kalai.  Why did so many very smart people (and successful!  They don’t do Wikipedia pages on just anyone [yet]) spend time on this while practically no one spends time with the other 61 purported proofs?

The most obvious reason is that this paper presented a fundamentally new approach to the problem.  As Lipton says: “the author has advanced serious and refreshingly new ideas of definite value”.  In this proof, Deolalikar uses finite model theory, an area of logic, to deduce structures in random satisfiability problems.  If P=NP, then those structures would have to be different from what is already known about random satisfiability problems in the critical region (this synopsis is vague to the point of not being usable).  This is definitely a different direction than past efforts, bringing together a number of disparate fields.

Further, Deolalikar knows very well that there are a number of proofs in the literature that say “P<>NP cannot be proved this way”.  For instance, any result that cannot distinguish between the easy 2-satisfiability and the hard 3-satisfiability is doomed to failure (Aaronson’s blog entry gives a few others, along with other signs to check for in claimed proofs).  Deolalikar presented reasons for believing that his approach evaded the barriers.  This leads to excitement!  Could this approach avoid the known invalid approaches?

Contrast this with papers that suggest that a linear programming model can correctly formulate an NP-complete problem.  Yannakakis showed that no symmetric formulation can do so, and that provides a powerful barrier to linear programming formulations.  Not only must a formulation not be symmetric, but its asymmetry must be of the sort that continues to evade Yannakakis’ proof.  Without a strong argument on why an LP formulation fundamentally avoids Yannakakis’ argument, it is not even worthwhile spending time with LP formulations of NP-complete problems.

This overwhelming doubt was clearly not felt by some referees who allowed Diaby’s papers to be published in “International Journal of Operational Research”, lending credence to the idea that the refereeing and journal process in operations research is broken (in my view, of course).  To the steps given in the Computational Complexity blog, we have to add a step: “Your paper is accepted by a third tier journal and still no one believes it.”  Similarly, it will not be enough for me to see that Deolalikar’s paper is published:  at this point I trust the blogs (some of them, anyway) more than the journals!

Even if the Deolalikar result is shown not to be valid, the paper gives me enormous hope that someday P<>NP (as I believe it to be, rather than P=NP) will be proved.  We appear to be developing methods that will have some traction in the area.



Culling Journals Time!

It is that time of the year when our librarian asks us to consider whether or not to continue subscribing to journals.  In the past, journals have been identified by “percentage increase”, with the idea that those whose increase is high need special attention to determine if they are still valuable.  This assumes that we had made good decisions in the past:  if a “bad” journal keeps its increase low enough, it doesn’t show up on the radar screen, while a low-priced but valuable journal with a “big” one-time increase gets special scrutiny.  But which should get more attention: a journal going up $60 on a base of $600 or an equivalent quality journal going up $200 on a base of $5000?  Ordering by percentage increase means the first gets much more attention, but rational budgeting suggests looking carefully at the second, as the sketch below illustrates.  While those values seem extreme, that is roughly what happens when comparing Management Science (as the “inexpensive” journal) and European Journal of Operational Research (whose price to Carnegie Mellon is $5855 per year).
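
A tiny sketch of that comparison (using the hypothetical figures above) shows how percentage-based triage points at the wrong journal:

```python
# Hypothetical journals from the example above: comparable quality,
# very different prices and dollar increases.
journals = [
    ("modest journal",    600,  60),    # +$60 on a $600 base
    ("expensive journal", 5000, 200),   # +$200 on a $5000 base
]

for name, base, increase in journals:
    pct = 100 * increase / base
    print(f"{name}: +${increase} on ${base} ({pct:.0f}% increase)")

# Percentage-based triage flags the $600 journal (10%) and waves the $5000
# journal (4%) through, even though the latter costs the library an extra
# $200 -- more than three times the dollar impact.
```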

This year, our librarian simply listed all journals above $500 and asked us to look those over.  Here are the ones in operations research/operations management we are considering:

INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH $8,615
EUROPEAN JOURNAL OF OPERATIONAL RESEARCH $5,855
JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY $1,840
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS $1,166
ZEITSCHRIFT FUR OPERATIONS RESEARCH aka Mathematical Methods of Operations Research $898
OPERATIONS RESEARCH LETTERS $815
JOURNAL OF OPERATIONS MANAGEMENT $637

The INFORMS journals don’t make the list since the bundled rate puts them under $500/journal.

What to do with these?  Fortunately, I have already done some checking on journal influence and pricing.

Let’s start with the first journal listed above: “International Journal of Production Research” (Taylor and Francis). If we take a “cost per eigenfactor” value, that journal ranks 39th in the “Operations Research” ranking.  I have never considered publishing in the journal, so I don’t know much about it.  It does publish a lot of articles (24 issues per year, with around 15-20 articles per issue).  I recognize a couple of names on its editorial board.  Harzing’s indispensable Publish or Perish shows that it has a fair number of published papers with 100+ Google Scholar cites.  Overall, not a bad or junk journal, but for $8615 I would want much more.  So I would be biased towards dropping it, but will bow to my colleagues in operations management on how they feel.  As near as I can tell, none of them have published in the journal either, so it might be a good one to cut.

European Journal of Operational Research (Elsevier) is a difficult one for me.  I have published two papers there recently, and it is a key outlet in operations research.  Since they publish many papers (24 issues/year times perhaps 25 papers per issue), the journal is important to our field.  Going through the same steps as above: the journal is 12th in cost per eigenfactor and number 1 in overall eigenfactor, I certainly know and admire much of the editorial board, and there are many papers above 100 cites.  I’m not crazy about a $5855 cost, but I think we are stuck paying it.

Journal of the Operational Research Society (Palgrave) would be a hard one for me to cut.  Sometimes it veers off in directions I am not crazy about (the dreaded “soft OR” versus “hard OR” debate) but it offers a nice mix of theory and application, along with the odd interesting historical piece.  Number 15 on the cost/eigenfactor, I think it is safe.

Computational Optimization and Applications (Springer) is a journal I have published in once, and it is part of the discussion when I am trying to place some of my work.  It is down the list at 25th in cost/eigenfactor, but has an admirable board.  Not a huge number of papers with 100+ cites (14 in my search), but pretty reasonable.  I think it is OK.

Zeitschrift fuer Operations Research (Mathematical Methods of Operations Research) (Springer) has a long history, going back to the days when it made sense to talk about a country’s operations research journals.  But, like “operations research” groups in Fortune 500 companies, country-oriented OR journals are finding it hard to compete.  In fact, I am finding it hard to parse out exactly what the history is here, but this appears to be the combination of a couple of different journals.  In any case, at 28th in the cost/eigenfactor listing, it is clear it needs more papers like “Modeling of Extremal Events in Insurance and Finance” by Embrechts and Schmidli (1994) (at an impressive 2051 Google cites) if it is going to survive.  So keep for now, but give it a stern eye.

OR Letters (Elsevier) is a natural keep at number 9 in the cost/eigenfactor listing.  It is as close as we get to the rapid publication that works so well in portions of computer science through their competitive conference system.

Journal of Operations Management (Elsevier), at number 10 in the cost/eigenfactor listing and a Financial Times journal to boot (meaning it is used to gauge research impact in the Financial Times ranking of business schools), is also a natural keep.

OK, there you have it:  I would toss out the International Journal of Production Research on the basis of stupid pricing but keep the rest.   We’ll see if my colleagues vote to keep it around for another year.

Authorship Order

Michael Mitzenmacher, in his excellent blog, My Biased Coin, has recent entries (here, here and here) on the order of authors on joint papers. When you have a last name that begins “Tri…”, it becomes pretty clear early on that alphabetical order is not going to result in a lot of “first author” papers. And it does tick me off when my work in voting theory becomes “Bartholdi et al.” or my work on the Traveling Tournament Problem is “Easton et al.”. I have even seen “Easton, Nemhauser, et al.” which is really hitting below the belt (since it is Easton, Nemhauser, and Trick).

Despite that, all of my papers have gone with alphabetical order, and I am glad I (and my coauthors) went that route. If even once I had gone with “order of contribution”, all of my papers would have been tainted with the thought “Trick is listed third: I guess he didn’t do as much as the others”.

The issue of determining “order of contribution” is a thorny one. There tend to be many skills that go into a paper, and we know from social choice how difficult it is to aggregate multiple orderings into a single ordering. Different weightings of the skills lead to different orderings, and there is no clear way to choose the weighting of the skills. Even with a weighting, determining the ordering for any particular aspect of the paper is often not obvious. When doing a computational test, do “running the code” and “tabulating the results” mean more than “designing the experiment” or “determining the instances”? I don’t think hours spent is a particularly good measure (“Hey, I can be more inefficient than you!”), but there is practically nothing else that can be objectively measured.

Further, most papers rely on the mix of skills in order to be publishable. This reminds me of an activity I undertook when I was eight or so. I had a sheet of paper and I went around surveying anyone around on what was most important: “the brain, the heart, or the lungs” (anyone with a five-year-old kid will recognize a real-life version of “Sid the Science Kid” and, yes, I was a very annoying kid, thanks for asking). My father spent time explaining to me the importance of systems, and how there is no “most important” in any system that relies on the others. I would like to say that this early lesson in “systems” inspired me to make operations research my field of study, but I believe I actually browbeat him until he gave up and said “gall bladder” in order to get rid of me. But the lesson did stay with me (thanks, Dad!), and perhaps I was more careful about thinking about systems after that.

Some of the arguments over order strike me as “heart versus lungs” issues: neither can survive without the other. So, if a person has done enough work that the paper would not have survived without them, that both makes them a coauthor and entitles them to their place in alphabetical order.

As for the unfairness of having a last name beginning “Tri…”, perhaps we should talk to my recent coauthors: Yildiz, Yunes, and Zin.

Reading Material While Snowed In

We had a record (21 inch) snowfall on Friday night, if you consider the 4th biggest snowfall of all time (since the 1860s) a record.  Since then, our city seems to be trying to turn this into our own little Katrina, showing very little planning or execution in getting the city back in working order.  City schools are closed and our street has yet to see a plow.  Once a car is painfully extracted from its snow cocoon, a curious Pittsburgh rite begins:  the placement of the kitchen chair.  Since the city is unable to actually remove any snow (it only pushes it around a bit), no on-street parking spaces are cleared except laboriously by hand.  Since it would be manifestly unfair for someone else to use the vacated spot, a kitchen chair is the accepted marker for “If you take this spot, I will curse you and your children and let the air out of your tires”.  Coincidentally,  I have my property tax check waiting to go in the mail.  What exactly am I getting for this high charge?

Anyhow, enough of the rant.  Being snowed in (for three days and counting, and furthermore… OK, …calm) allows me to read my favorite issue of my favorite journal.  The January-February 2010 Interfaces is now available, and we all know what that means:  the Edelman papers!  The Edelman, of course, is INFORMS’ big prize for the practice of operations research.  Every year, a few dozen nominees get whittled down to a half dozen finalists.  These finalists then prepare a fancy presentation, ideally involving a Cxx for suitably impressive xx.  They also put together a paper describing their work.  This is then published in the January-February issue of Interfaces.

I was a judge in the last competition, so I know the work of the finalists very well.  But it is inspiring to read the final versions of their papers.  I have a course on the applications of operations research that I teach to our MBAs and Edelman papers are generally a highlight of their readings.

In the 2009 competition, the finalists were:

CSX Railway Uses OR to Cash In on Optimized Equipment Distribution
Michael F. Gorman, Dharma Acharya, David Sellers

HP Transforms Product Portfolio Management with Operations Research
Dirk Beyer, Ann Brecht, Brian Cargille, Russ Chadinha, Kathy Chou, Gavin DeNyse, Qi Feng, Cookie Pad, Julie Ward, Bin Zhang, Shailendra Jain, Chris Fry, Thomas Olavson, Holger Mishal, Jason Amaral, Sesh Raj, Kurt Sunderbruch, Robert Tarjan, Krishna Venkatraman, Joseph Woods, Jing Zhou

Operations Research Improves Sales Force Productivity at IBM
Rick Lawrence, Claudia Perlich, Saharon Rosset, Ildar Khabibrakhmanov, Shilpa Mahatma, Sholom Weiss, Matt Callahan, Matt Collins, Alexey Ershov, Shiva Kumar

Marriott International Increases Revenue by Implementing a Group Pricing Optimizer
Sharon Hormby, Julia Morrison, Prashant Dave, Michele Meyers, Tim Tenca

Norske Skog Improves Global Profitability Using Operations Research
Graeme Everett, Andy Philpott, Kjetil Vatn, Rune Gjessing

Zara Uses Operations Research to Reengineer Its Global Distribution Process
Felipe Caro, Jérémie Gallien, Miguel Díaz, Javier García, José Manuel Corredoira, Marcos Montes, José Antonio Ramos, Juan Correa

Any one of them could have been the winner: I really liked all of the work. HP ended up winning (now that I see the author list, they certainly had the numbers on their side!). I get to judge again this year, and am once again looking forward to doing that.

So, back to the hot chocolate and the fuming about municipal services… hmmmm… I wonder if I can convince our mayor to use a bit more operations research?

Journal Impact and Costs

I am a co-editor of a “new” journal Surveys in Operations Research and Management Science published by Elsevier. I’ll write more about that journal and my thoughts about it in another post. I expect to be blasted by some people whose opinions I value about teaming up with a commercial publisher, but I did have my reasons!

I spent time this past weekend in Phoenix at an Elsevier editors conference where there were about 70 editors from a wide variety of fields (lots of medicine and chemistry). During the weekend, there were a number of presentations on things like blogging and electronic paper handling and so on. One session I enjoyed very much was about bibliometrics: measures to determine the impact of a journal. I had kinda known some of this before, but it was interesting to get a synopsis of how these things work.

The standard “impact factor” comes from the different companies that have owned the Science Citation Index and now the ISI Web of Science (Thomson Reuters). Briefly, if you want to calculate the impact factor (IF) of a journal in 2005, you look at the 2005 articles from that journal, add up all the references to those articles published in any ISI journal in 2006 and 2007 (no later), and divide by the number of articles in 2005. There are lots of details to argue over: what is an “article”? What is a “reference”? What journals should be ISI? and so on.
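
In code, the calculation just described is a count-and-divide; here is a minimal sketch with hypothetical numbers, following the forward-looking framing above:

```python
# Hypothetical counts: citations received in 2006 and 2007 (and no later)
# to a journal's 2005 articles, divided by the number of 2005 articles.
articles_2005 = 120          # citable items published in 2005
cites_in_2006 = 95           # 2006 citations to those articles
cites_in_2007 = 145          # 2007 citations to those articles

impact_factor = (cites_in_2006 + cites_in_2007) / articles_2005
print(f"Impact factor: {impact_factor:.2f}")    # 2.00 with these made-up numbers
```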

Even the basic structure certainly gives one pause in determining impact. This definition means that all the wonderful citations I get for my old papers on voting in Social Choice and Welfare, a set of papers from the late 80s that are currently in vogue, are never measured for the impact of that journal for any year: they fall outside the two-year window. For some very fast-moving fields (say, genetics), this two-year window might be appropriate. But for others, including operations research I would say, this window seems to measure the wrong things, ignoring the peak for many papers.

Further, there are lots of ways to manipulate this value (I will point out that the Elsevier presenter explicitly stated that journals should not do anything specifically to manipulate any impact factor). I have heard of journals that, upon accepting a paper, provide authors with a list of reference suggestions from that journal within the two year window. “No pressure, mate, but you might consider these references… helps us out a lot, you know!” Pretty slimy in my view, but it is done.

What I found most interesting is that there are other measures of impact, some of which seem to be gaining traction. The most intriguing is a measure, the eigenfactor, that uses the same eigenvector approach that Google uses in its PageRank. Imagine journals as a network, with edge weights giving the number of times articles in one journal reference another journal. This gives an influence diagram, and the leading eigenvector gives (in a well-defined way) the importance of a node relative to the number of references.

It is certainly not clear that number of references is a good proxy for influence, and not every reference is the same. Consider “In the fundamental work of [1], disproving the absurd argument of [2], which built on [3,4,5,6,7,8]”: all those articles are referred to once, but I know which one I would like as my article. But, if you are going to base a measure on counts of references, I would certainly trust an eigenvalue-based approach over a pure counting approach.

The approach, outlined in detail at eigenfactor.com, has the further advantages that it uses a five-year window and it ignores journal-level self-citations. The five-year window gives more time for citations to count towards a paper, without giving a huge advantage to older journals. Ignoring self-citations removes the easiest avenue for manipulation by a journal editor. So I like it!
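
For the curious, here is a minimal sketch of the eigenvector idea on a made-up three-journal citation matrix. It is the same power-iteration calculation behind PageRank, with journal self-citations zeroed out as eigenfactor does; the real algorithm adds normalizations and a teleportation term that I am skipping here.

```python
import numpy as np

# Made-up citation counts: C[i, j] = citations from journal i to journal j,
# accumulated over (say) a five-year window.
journals = ["Journal A", "Journal B", "Journal C"]
C = np.array([[ 0., 30., 10.],
              [20.,  0.,  5.],
              [40., 15.,  0.]])

np.fill_diagonal(C, 0.0)                 # drop journal self-citations, as eigenfactor does

P = C / C.sum(axis=1, keepdims=True)     # each journal's outgoing citations sum to 1
score = np.full(len(journals), 1.0 / len(journals))

# Power iteration: push influence along citation links until the scores settle.
# The fixed point is the leading eigenvector of the citation network.
for _ in range(100):
    score = P.T @ score
    score /= score.sum()

for name, s in sorted(zip(journals, score), key=lambda pair: -pair[1]):
    print(f"{name}: {s:.3f}")
```

A journal scores well not just by being cited often, but by being cited by journals that are themselves influential, which is exactly the property a raw citation count misses.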

The site eigenfactor.com lets you look at journal eigenfactor and per-article influence rankings. There are a couple of different classifications of journals, so let’s look at JCR’s “Operations Research and Management Science” list. The 2007 per-article rankings are:

  1. Management Science
  2. Mathematical Programming
  3. Operations Research
  4. Mathematics of OR
  5. Transportation Science

Eigenfactor scores (which measures the overall impact of the journal) moves things around a bit:

  1. European Journal of Operational Research
  2. Management Science
  3. Mathematical Programming
  4. Operations Research
  5. Systems and Control Letters

EJOR is on top since the journal has a good per-article influence score and publishes lots of articles.

INFORMS journals do pretty well, with 4 of the top 5 in the first list and 2 of the 5 in the second.

What is really neat is to look at the cost to get those eigenfactor values. It would cost $93,408 to subscribe to all 58 journals (these are the individual journal costs: undoubtedly the large publishers bundle their subscriptions, as does INFORMS). Paying the $656 (in 2007) for Management Science is 0.7% of that cost but gets you more than 10% of the total eigenfactor in this field. Subscribing to the top 11 journals in this ranking would cost $5723 (and get you 7 INFORMS journals) and get you more than 1/3 of the total eigenfactor. Adding the 12th would get you European Journal of Operational Research, but at $5298 it would practically double your cost while increasing your total eigenfactor amount from 37.8% to just 49.4%. Other amazing prices include Engineering Optimization, which costs $4338 for much less than 1% of the field’s eigenfactor, and International Journal of Production Research, which costs $7684, albeit for 8% of the total eigenfactor.
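
The value-for-money comparison above is just a cost-to-share ratio; a sketch with the prices quoted above and rough, illustrative eigenfactor shares (not the actual 2007 data) might look like this:

```python
# Prices are the ones quoted in the discussion above; the eigenfactor shares
# are rough, illustrative fractions of the category total, not actual figures.
journals = {
    "Management Science":                       (656,  0.100),
    "European Journal of Operational Research": (5298, 0.116),
    "Engineering Optimization":                 (4338, 0.008),
}

# Rank by dollars per unit of eigenfactor share: lower means better value.
for name, (price, share) in sorted(journals.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: ${price:>5,} buys {share:.1%} of the field "
          f"(${price / share:,.0f} per unit of share)")
```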

Now, there are lots of caveats here. Most importantly, while reference numbers are a proxy for impact, they are not equivalent. If you have a paper that applies operations research to a real problem, publishing in Interfaces might have the most impact, even if the journal is ranked 21st by eigenfactor. And when it comes to costs, I am not sure anyone really pays “list price” in this day of aggregation (and prices for individuals are much lower for many journals).

When you are arguing with your librarian on which journals to cut (or, more rarely, add), you might want to look at some of this data. And might I suggest the full suite of INFORMS journals? At $99 for an individual for online access (and under $5000 for institutions), this should give you the recommended daily allowance of eigenfactors at a very affordable price. Makes a great stocking stuffer at Christmas!

Social Engineering for the Overeducated

I got an interesting email today.  Ostensibly from Prof. Jochem Koos (a good Dutch name) from Elsevier, the email gives editorial and review policies for its journals.  After stating that referees must be of the highest quality, the letter then asks potential editors and reviewers to fill out a form to certify their credentials and be added to their list.   The bad part is that it costs $100 to be added to the list.  The good part is that referees are to be paid $30 per page reviewed.  $30!  I just finished a 50 page monster for an Elsevier journal.  I could get $1500?!  Wow!  Does that go for revisions too?  I could get that paper in an endless series of revisions at $1500 per round.  “There is a comma misplaced on page 26.  I will need to see a revision before I recommend publication.”    Ka-ching!  And if I am running a bit short on cash a few reviews of the form “Interesting paper but I would like to see it … 13 pages longer” should be just the thing for making the mortgage payment.

Of course, it is all a ruse.  Elsevier does not pay $30/page for refereeing (at least, not in operations research) and you don’t have to pay $100 to get on an approved list to referee.

It is surprising that a group that overwhelmingly has PhDs can be a target for such a scam.  On the other hand, considering some of the people I have met in my professional life, perhaps it is not that surprising.

Bottom line:  don’t send off $100 to just anyone with a gmail address.  Elsevier has some more information on these sorts of emails.