Different Mores for Different Fields

In the wake of the discussion of how different fields have different measures of evaluation (a view I am not 100% on board with: if a subgroup chooses a method of evaluation antithetical to the mores of the rest of academe, it should not be surprised when it gets little respect outside its narrow circle), it was interesting to flip through a recent issue of Nature (thanks Ilona!). In addition to a fascinating article on the likelihood of Mercury colliding with the Earth in the next 3 billion years or so (about 1/2500 if I read things correctly), I noted the apparently required author-contribution paragraph for co-authored papers:

J.L. designed the study, performed the simulations, and their analysis and wrote the paper. M.G. wrote the computer code.

(other articles with more coauthors divvy up the work in more detail).

We don’t do this in operations research (at least as far as I have seen). I have made a point of always going with alphabetical author listing (which generally puts me last, though I have sought out co-authors Yildiz, Yunes, and Zin recently), which has the aura of equal participation, even in cases where the participation is not so equal. Other people try to order by contribution, though it is unclear what metric to use in such a case. In promotion and tenure, we typically (at our school) do not try to parse out individual contributions to papers, though we do discuss individual strengths and weaknesses.

I think this sort of paragraph would actually be a boon to our literature. It would force some people to think about why they are really part of a paper, and add honesty to the system. Of course, it also adds to the arguing and power struggle that can arise in research collaborations.

Conference Proceedings are Not Enough

In much of operations research, a conference is simply an opportunity to give a talk on recent research.  At INFORMS, EURO, IFORS and many other conferences, there are no printed proceedings, and no real record of what was presented in a talk.  While giving a talk is useful, it doesn’t really count for much in most promotion and tenure cases.  If you want to continue in academic OR, you need to publish papers, generally in the “best” journals possible.

However, in some parts of OR, particularly those parts that overlap with CS, conference presentations are much more competitive and prestigious.  In my own area, conferences such as CP, CPAI-OR, PATAT, MISTA, INFORMS-Computing and a few others are competitive to present at.  A full (15 page or so) or short (5 page) paper must be submitted, and these are reviewed (with varying amounts of rigor).  Acceptance rates can range as low as 20%, and are rarely above 40%.   The papers are then published either in a book on their own or in a series such as Lecture Notes in Computer Science.   These do “count” towards promotion and tenure, and researchers who can consistently get accepted at these conferences are very well thought of.

This has led, however, to some researchers (and entire swathes of some subfields) simply not publishing in archival journals.  I have seen resumes from some very good researchers that have essentially no journal papers.  I can understand the reasons:  journal publishing is a slow and frustrating process (and I am part of that problem, though I am getting better at refereeing and editorial roles!).  Further, since journals will typically not publish verbatim versions of papers published at conferences, new things must be added.  It is unappealing to go back to the topic just to leap over a journal publication barrier.

But I think it is necessary to publish in journals where the refereeing is generally more thorough and the page limits are such that topics can be thoroughly explored.  Samir Khuller at the Computational Complexity blog has made a similar argument (thanks to Sebastian Pokutta for the pointer):

It’s very frustrating when you are reading a paper and details are omitted or missing. Worse still, sometimes claims are made with no proof, or even proofs that are incorrect. Are we not concerned about correctness of results any more? The reviewing process may not be perfect, but at least it’s one way to have the work scrutinized carefully.

Panos Ipeirotis, whose blog is titled “A Computer Scientist in a Business School,” has objected to this emphasis on journal papers:

Every year, after the Spring semester, we receive a report with our annual evaluation, together with feedback and advice for career improvement (some written, some verbal). Part of the feedback that I received this year:

  1. You get too many best paper awards, and you do not have that many journal papers. You may want to write more journal papers instead of spending so much time polishing the conference papers that you send out.
  2. You are a member of too many program committees. You may consider reviewing less and write more journal papers instead.

I guess that having a Stakhanovist research profile (see the corresponding ACM articles) is a virtue after all.

Panos also has an interesting proposal to get rid of acceptance/rejection completely.

I have mixed feelings on this.  On one hand, conferences work much more efficiently and effectively at getting stuff out (there is nothing like a deadline to force action).  On the other hand, having watched this process for both conferences and journals, I am much more confident in stuff published in journals (by no means 100% confident, but more confident).  Too many conference papers dispense with proofs (and have, in fact, incorrect results) for me to be happy when only conference papers are published.

Finally, in a business school at least, but I believe also in industrial engineering, promotion and tenure cases need to be made outside the field to people who are still overwhelmingly journal oriented.  I would rather spend my time explaining a paper and saying why it is great than justifying the lack of journal publications as a field-specific phenomenon that should not be held against the candidate.

So publish that journal paper!

Humanitarian Operations Research

Two and a half years ago, I spent a sabbatical year in New Zealand.  I had a great year, and very much enjoyed the vibrant research life at the University of Auckland, and the even more interesting life of living in New Zealand (you can check out my blog from the year, and perhaps especially some pictures from the house we lived in).  And the research was good, allowing me a chance to finish some things I was working on and to start some new things.

Despite the success of the year, I have had a nagging feeling that I could have done something more … useful with the time.  Does the world really need a slightly better soccer schedule?  Are my new thoughts on logical Benders’ approaches really important?

Before I left for New Zealand, I had been talking with some people from Bill Clinton’s foundation who worked on AIDS/HIV issues.  In the AIDS world, “operations research” has a different meaning than it does in my world: it means designing tests of alternative approaches and evaluating the results of those tests.  I would call that statistical experimental design.  But the Clinton people really understood what “real” operations research could provide:  more effective allocation of scarce resources.  We had some good discussions and I pointed them to people who knew far more about this area than I did.

It was only later that I thought:  “Maybe I should spend a sabbatical year looking at AIDS/HIV issues”.  Then, in discussions with people like Luk Van Wassenhove, I learned more about the work done in “Humanitarian Operations Research”.    I think next time I have an extended period away from teaching and administrative responsibilities, I will think about how I might make the world a better place through operations research.

Until then, let me do my little bit to help advertise that side of the field.  Three faculty members from Georgia Tech (Özlem Ergun, Pinar Keskinocak, and Julie Swann) are soliciting papers for a special issue of Interfaces on the topic “Humanitarian Applications: Doing Good with Good OR”.  If you are doing work that is having a positive effect on the world, you might consider submitting to the special issue.  From the call for papers:

This special issue focuses on humanitarian applications of operations research (OR) and management science (MS) models and methods in practice, or “Doing Good with Good OR.” Examples of research topics include planning and response to large-scale disease outbreaks, such as pandemic influenza, improved logistics for reaching earthquake victims, implementation of new energy-market structures to enable greater distribution, solutions for fair and sustainable water allocation, more accurate prediction of hurricane paths and devastation, prevention of terrorist attacks through algorithmic identification of perpetrators, and reduction of poverty through new market mechanisms. Appropriate papers include descriptions of practice and implementation of OR/MS in industry, government, nongovernmental organizations, and education.

The due date for submissions is June 15.  I look forward to the issue very much.

Closed Loop Supply Chains

There is a new paper on the OR Forum by Dan Guide and Luk Van Wassenhove that looks at the research trajectory of “Closed Loop Supply Chains”.  Closed loop supply chains are supply chains where there is at least as much interest in getting things from the customer to the supplier as vice versa.  Sometimes the drive for this is environmental (think European electronics laws to try to reduce metals in the refuse system) and sometimes it is economic (think of a printer manufacturer getting back used cartridges to try to cut down on the refill market, or firms that restore used items for further sale).  Luk and Dan’s paper is a nice, personal view of the research that has gone on in recent years.

For about eight years (1997-2005), I headed up the Carnegie Bosch Institute.  Part of what we did was sponsor conferences and workshops on emerging topics in international management.  One of our success stories was early support for closed loop supply chains (or reverse logistics).  I am really pleased to see how the field has developed.

New blog and new journal

A new blog by Bill Hart of Sandia National Labs reminds me that there is also an exciting new journal about to begin.  From Bill’s blog:

I have recently joined the editorial board of the new journal Mathematical Programming Computation, which publishes original research articles that are at the intersection of math programming and computing. This journal reflects the growing role of computation in operations research, where real-world applications often require the application of complex software packages to analyze mathematical models.

I too am involved in the new journal, as one of the members of the advisory board.  If you have some computational work looking for a good home, be sure to consider this journal.  With Bill Cook of Georgia Tech as editor, I am expecting it to be a great success.

Citations in Management Science and Operations Research

The Tepper School, in its promotion and tenure cases, has had more conversation about (if not emphasis on) citation counts for papers. This is partially a “Google Scholar” effect: the easier it is to find some data, the more people will rely on that data. Those of us who bring notebook computers to the tenure cases can immediately add new “data” to the discussion. I fear I have been a big part of this trend here, using Google Scholar as a quick measure of “effect”. I have even written about this on the blog, in an effort to find highly cited OR papers. The software “Publish or Perish” by Harzing.com has been invaluable in this regard: it generates results from Google Scholar, collates them, combines likely double entries, and sorts them in a number of ways. Through it, I can learn immediately that my h-index is 19 (not quite: it doesn’t combine some papers, so my h-index is closer to 16), that a paper Anuj Mehrotra and I wrote on graph coloring is my highest cited “regular” paper, and that it is the third most cited paper ever in INFORMS Journal on Computing. I can even search the citations of my nemeses (“Hah! I knew that paper was never going to lead to anything!”). What a great way to spend an afternoon!
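
For anyone who has not run into the metric: the h-index is the largest number h such that h of an author’s papers each have at least h citations, which is presumably why uncombined duplicate entries can nudge the number upward. Here is a minimal sketch of the computation in Python, with made-up citation counts rather than anything pulled from Google Scholar:

    def h_index(citations):
        # Largest h such that h papers each have at least h citations.
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts, for illustration only.
    print(h_index([120, 85, 40, 22, 19, 19, 18, 7, 3]))  # prints 7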

But does any of this mean anything? In the current (March-April 2008) issue of Interfaces, Malcolm Wright and J. Scott Armstrong take a close look at citations in an article entitled “Verification of Citations: Fawlty Towers of Knowledge?” (Interfaces, 38(2): 125-139). They talk about three types of errors (and I recognize the risk that this blog summary may end up committing some of these errors, including the count of errors! [In fact, the initial post of this entry misspelled the title.]):

  1. Failure to include relevant studies
  2. Incorrect references
  3. Quotation errors

Much of the article involves a paper written by Armstrong and Overton in 1977 on overcoming non-response bias in surveys. The overlap in authors means that they probably understand what the paper meant to say, but it also means a certain lack of objectivity on the subject. Despite the objectivity issue, the article makes for stunning reading.

The most persuasive of the arguments regards “Quotation Errors”. While it is not new to note that many authors don’t read all of the papers in their references, it is amusing to see how many people can’t even get the basic ideas right:

A&O is ideal for assessing the accuracy of how the findings were used because it provides clear operational advice on how to constructively employ the findings. We examined 50 papers that cited A&O, selecting a mix of highly cited and recently published papers. …

Of the articles in our sample, 46 mentioned differences between early and late respondents. This indicates some familiarity with the consequences of the interest hypothesis. However, only one mentioned expert judgment, only six mentioned extrapolation, and none mentioned consensus between techniques. In short, although there were over 100 authors and more than 100 reviewers, all the papers failed to adhere to the A&O procedures for estimating nonresponse bias. Only 12 percent of the papers mentioned extrapolation, which is the key element of A&O’s method for correcting nonresponse bias. Of these, only one specified extrapolating to a third wave to adjust for nonresponse bias.

The paper was also not referred to correctly in many cases:

We examined errors in the references of papers that cite A&O. To do this, we used the ISI Citation Index (in August 2006). We expected this index to underrepresent the actual error rate because the ISI data-entry operators may correct many minor errors. In addition, articles not recognized as being from ISI-cited journals do not have full bibliographic information recorded; therefore, they will also omit errors in the omitted information. Despite this, we found 36 variations of the A&O reference. Beyond the 963 correct citations, we found 80 additional references that collectively employed 35 incorrect references to A&O. Thus, the overall error rate was 7.7 percent.
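
(If I follow the arithmetic, the 7.7 percent is the 80 erroneous references measured against all 963 + 80 = 1,043 references to A&O in the index: 80/1,043 ≈ 0.077.)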

Their discussion of “missing references” was not convincing to me (though it is unclear how to do this in an objective way). The authors did some Google searches and checked how often some of their key ideas were missing. Since they found about a million results for “(mail OR postal) and survey” AND (results OR findings), and only 24,000 of those mention (error OR bias), of which only 348 mention Armstrong OR Overton, they conclude that their work on bias is not well represented in real surveys. It doesn’t take much experience with Google search to believe that the baseline of a million pages does not correspond to one million surveys (and the vast majority of internet surveys have zero interest in accuracy to the level of A&O). Their work with Google Scholar had similar results, and I have similar concerns over the relevance of the baseline search. But I certainly believe this qualitatively: there are many papers that should provide more relevant references (particularly to my papers!).

The authors have a solution to the “quotation error” issue that is both simple and radical:

The problem of quotation errors has a simple solution: When an author uses prior research that is relevant to a finding, that author should make an attempt to contact the original authors to ensure that the citation is properly used. In addition, authors can seek information about relevant papers that they might have overlooked. Such a procedure might also lead researchers to read the papers that they cite. Editors could ask authors to verify that they have read the original papers and, where applicable, attempted to contact the authors. Authors should be required to confirm this prior to acceptance of their paper. This requires some cost, obviously; however, if scientists expect people to accept their findings, they should verify the information that they used. The key is that reasonable verification attempts have been made.
Despite the fact that compliance is a simple matter, usually requiring only minutes for the cited author to respond, Armstrong, who has used this procedure for many years, has found that some researchers refuse to respond when asked if their research was being properly cited; a few have even written back to say that they did not plan to respond. In general, however, most responded with useful suggestions and were grateful that care was taken to ensure proper citation.

Interesting proposal, and one most suitable for things like survey articles that attempt to cover a field. I am not sure how I would react to requests of this type: I suspect such requests might fall through the cracks amongst all the other things I am doing. But if the norms of the field were to change…

The article has a number of commentaries. I particularly liked the beginning of the Don Dillman article:

In 1978, I authored a book, Mail and Telephone Surveys: The Total Design Method (Dillman 1978). According to the ISI Citation Indexes, it has now been cited in the scientific literature approximately 4,000 times. When reviewing a summary of its citations, I discovered citations showing publication dates in 24 different years, including 1907 (once) and 1908 (three times). Citations erroneously listed it as having been published in all but three of the years between 1971 and 1995; there were 102 such citations. In addition, 10 citations showed it as having been published in 1999 or 2000. I attribute the latter two years to authors who intended to cite the second edition— although I had changed the title to Mail and Internet Surveys: The Tailored Design Method (Dillman 2000).

I discovered 29 different titles for the book, including mail descriptors such as main, mall, mial, mailed, and mailback. The telephone descriptor also had creative spellings; they included telephon, teleophone, telephones, telephone, and elephone. Not surprisingly, my name was also frequently misspelled as Dillon, Dilman, Dill, and probably others than I was unable to find. I also discovered that I had been given many new middle initials. A similar pattern of inaccuracies has also emerged with the second edition of the book.

I do believe that technology can help with some of these issues, particularly with incorrect references. But including relevant references and correctly summarizing or using cited references is an important part of the system, and it is clear that the current refereeing system is not handling this well.

Papers and commentary like this are one reason I wish Interfaces were available to the wider public (I found an earlier version of the base article here, but no link to the commentaries), even if just for a short period.  The article and commentaries are worth searching around for.

ISI and Conferences

ISI from Thomson Scientific might be seen as just another scientific article indexing service, just like Google Scholar, Citeseer, and many others. But it has a much stronger effect: many universities only “count” ISI-indexed publications. In mainstream operations research, this doesn’t have a very strong effect. Most well-known OR journals are ISI-indexed, and those that are not strive greatly for such indexing. But in some of our newer fields, particularly those related to computer science, this is much more of an issue. In constraint programming, conferences are key outlets for research, particularly such conferences as CP and CP/AI-OR. Both of these conferences publish their papers in Springer’s Lecture Notes in Computer Science, which up until recently was ISI-indexed.

The ISI people have recently dropped LNCS and other series from their main ISI-indexing, and created a “conference” version of ISI-indexing. For some, this can have a very strong effect on promotion and tenure.

I have somewhat mixed feelings about this. At one level, I am worried that the ISI-“page count” available for some of our fields has decreased greatly. This will make it harder for some of our subfields to grow and expand. On the other hand, conference volumes are just part of the proliferation of research outlets, and not ISI-indexing them seems appropriate given the sometimes quick level of review that some conferences provide. It is certainly true that the conferences provide reasonably thorough reviews and the rejection rate is high (70% or more rejected), but this still seems different than journals. Perhaps it is just me: as a member of a conference committee, I feel it is my responsibility to review papers somewhat farther from my core interests than I would for journal reviews. As such, I can’t provide the same insights for conferences that I do for journals.

Much of OR doesn’t have this problem. Most OR conferences are “free-for-alls” with no veneer of reviewing. This makes for large conferences of highly-variable quality. It is great for talking about not-fully-formed ideas, or interesting but not publishable results. I wonder if some of the competitive conferences will change in flavor now that the “carrot” of an ISI-indexed paper is not offered. At the very least, people should be working harder to follow up the conference with a journal publication, which seems a good outcome.

Looking for an Editor

ITOR (International Transactions in Operational Research) is looking for its next editor. I chair the search committee. We just did a review of the journal, and I think it offers an interesting opportunity to the right person. The key is trying to make the journal not just a “me-too” journal, taking the rejects from higher-ranked journals. The issue is finding the right niche. Since the journal is sponsored by IFORS, making the journal a key outlet for “international operations research” seems a very promising direction.

But what is “international operations research”? Lots of OR seems country/culture-independent. Certainly much of “mathematical programming”-type operations research does not seem “international” in any way. But there are issues of particular interest to developing countries that seem quite international. And there are many topics in international management that seem to fit (models in international operations for instance).

I guess I am hoping that someone out there will come up with a convincing vision of what “International OR” could be and how ITOR could play an important role in achieving that vision.

IFORS feels strongly about wanting the journal to succeed. The Administrative Committee has budgeted some funds for the activities that generate journal material (workshops, datasets, etc.) to help make the journal a success.

Here is the full call for nominations:
