In much of operations research, a conference is simply an opportunity to give a talk on recent research. At INFORMS, EURO, IFORS and many other conferences, there are no printed proceedings, and no real record of what was presented in a talk. While giving a talk is useful, it doesn’t really count for much in most promotion and tenure cases. If you want to continue in academic OR, you need to publish papers, generally in the “best” journals possible.
However, in some parts of OR, particularly those that overlap with CS, conference presentations are much more competitive and prestigious. In my own area, conferences such as CP, CPAI-OR, PATAT, MISTA, INFORMS-Computing, and a few others are competitive to present at. A full (roughly 15-page) or short (5-page) paper must be submitted, and submissions are reviewed (with varying amounts of rigor). Acceptance rates can range as low as 20%, and are rarely above 40%. The papers are then published either in a standalone book or in a series such as Lecture Notes in Computer Science. These do “count” towards promotion and tenure, and researchers who can consistently get accepted at these conferences are very well thought of.
This has led, however, to some researchers (and entire swathes of some subfields) simply not publishing in archival journals. I have seen resumes from some very good researchers who have essentially no journal papers. I can understand the reasons: journal publishing is a slow and frustrating process (and I am part of that problem, though I am getting better at my refereeing and editorial roles!). Further, since journals will typically not publish verbatim versions of papers published at conferences, new material must be added. It is unappealing to return to a topic just to clear a journal-publication hurdle.
But I think it is necessary to publish in journals where the refereeing is generally more thorough and the page limits are such that topics can be thoroughly explored. Samir Khuller at the Computational Complexity blog has made a similar argument (thanks to Sebastian Pokutta for the pointer):
It’s very frustrating when you are reading a paper and details are omitted or missing. Worse still, sometimes claims are made with no proof, or even proofs that are incorrect. Are we not concerned about correctness of results any more? The reviewing process may not be perfect, but at least it’s one way to have the work scrutinized carefully.
Panos Ipeirotis, whose blog is titled “A Computer Scientist in a Business School,” has objected to this emphasis on journal papers:
Every year, after the Spring semester, we receive a report with our annual evaluation, together with feedback and advice for career improvement (some written, some verbal). Part of the feedback that I received this year:
- You get too many best paper awards, and you do not have that many journal papers. You may want to write more journal papers instead of spending so much time polishing the conference papers that you send out.
- You are a member of too many program committees. You may consider reviewing less and writing more journal papers instead.
I guess that having a Stakhanovist research profile (see the corresponding ACM articles) is a virtue after all.
Panos also has an interesting proposal to get rid of acceptance/rejection completely.
I have mixed feelings on this. On one hand, conferences work much more efficiently and effectively at getting stuff out (there is nothing like a deadline to force action). On the other hand, having watched this process for both conferences and journals, I am much more confident in work published in journals (by no means 100% confident, but more confident). Too many conference papers dispense with proofs (and have, in fact, incorrect results) for me to be happy when only conference papers are published.
Finally, in a business school at least, but I believe also in industrial engineering, promotion and tenure cases need to be made outside the field, to people who are still overwhelmingly journal-oriented. I would rather spend my time explaining a paper and saying why it is great than justifying the lack of journal publications as a field-specific phenomenon that should not be held against the candidate.
So publish that journal paper!
I’m curious about the value placed on journal papers. Is the goal of journals to drive academic research and awareness in the scientific community? Or is their value in driving dollars in grants and subsidies back to the research institution? I could argue, Michael, that you are executing on driving academic research with conferences and committee participation but not contributing to the real money stream. Is that so bad?
I’m not in academics so I might have totally missed the mark. It does sound like an interesting debate.
Here is the issue: *Impact* is a field-specific phenomenon. So evaluating someone in field A using the criteria used to evaluate someone in field B is de facto wrong.
I remember talking with my professor of Russian when I was graduating with my PhD. She asked me which publisher I was targeting for my book now that I was starting my tenure-track career. When I told her that I was targeting some journals and that my field does not really care about books, she was very surprised! How can someone get tenure without publishing a book? Journal articles are just pieces that should lead to a comprehensive piece of work (the book). How can someone publish only in journals, without a book, and be seriously considered a scholar?
It is in our nature to judge someone else’s success using our own “success” criteria. It is easy and convenient, and we do not have to move out of our comfort zone. Unfortunately this is wrong. Accepting interdisciplinary research also means understanding how other fields operate, and going beyond the “my field is better than your field” mentality.
Panos is absolutely correct that review standards for academics need to be field-specific. My take on the conference paper vs. journal issue, though, is that it boils down to the rigor of peer review (which I think is also the heart of Mike’s point). Particularly when something is published/archived for future researchers to read and build upon, I’d like some measure of confidence that it is correct. There are certainly papers in “good” academic journals that are badly flawed (I have a favorite that I once threw at students in an inventory theory seminar, just to see if they could tell the wheat from the chaff), but my experience is that conference papers are on average considerably more wobbly than journal articles.
The notion of publishing every submitted paper, along with reviews, rather than using acceptance/rejection is interesting, but there are serious problems. It is unworkable for print journals, but could be tried with online journals. Among other concerns, though, it would probably make my life harder as a reader. In the current system, I put a certain amount of blind faith in a journal’s ability to control quality of papers published. I can generally get a pretty good idea of whether a paper is worth my time by reading the abstract (assuming it’s in a “good” journal). With the open system, I would have to read not just the abstract but all the reviews, weigh the perceived quality of the reviews, and form my own editorial judgment, which is frankly more work than I prefer to do.
I think that faster reviews for OR journal submissions would go a long way toward encouraging authors to submit journal papers.
A major benefit of conference publications over journal articles is the firm deadline for review and publication.
Reviewers for medical journals are generally required to submit reviews by a deadline, either two or four weeks after receiving the paper. If a review is not complete, the reviewer has lost the opportunity to comment and the paper is sent to another reviewer. I am lately submitting only to medical and quasi-medical journals (my application area is organ allocation). The prospect of waiting about a year to receive reviews of a paper sent to an OR journal is too depressing. I can see my work in print within three or four months of submission by going with journals that are running a tight ship on the review process.
The published guidelines for Operations Research state that only after seven months of delay will a paper be reassigned to a new reviewer. One month would be more appropriate. Given a policy like this, it is absurd that Operations Research tells authors to expect reviews within four months. My paper’s first review took nine months; I turned around the paper in three weeks; and the second look took eight months. If papers aren’t reassigned until seven months have passed, then year-long delays are likely commonplace.
Paul, we agree: at the very core of the issue is the quality of reviewing. I know that in OR the reviewing at conferences is significantly more superficial than the reviewing for OR journals, and this leads Mike to believe that the same applies to CS, a “sister” field. (Btw, most selective CS conferences have acceptance rates below 20%, and often below 10%; 30% and above is not considered selective.)
For CS journals, however, I found journal reviewing to be *significantly* easier than the reviewing at competitive conferences. Once something is published in a competitive conference, getting the journal paper published is a mere formality. Combine that with the publication delay typically associated with journals, and the journal paper ends up having no impact, as the people who were interested in the research have already read and cited the original conference paper.
Fundamentally, the reason for publishing is to reach as many people as possible, have an impact, and help fellow researchers progress. Satisfying the academic bureaucracy is a side-product of this process, not the goal.
Panos wrote in a comment on another thread: let me copy it here since it seems more appropriate.
“if a subgroup chooses a method of evaluation antithetical to the mores of the rest of academe, don’t be surprised if the group gets little respect outside their narrow group”
Perhaps I can give a counter-argument that may convince you of the opposite: making comparisons across fields is unfair and often produces strange outcomes.
A few years back, my school decided to classify journals as “top” journals or not. (Top researchers publish in top journals, right?)
So, how do you rank journals? Well, the committee decides to be objective and fair, and to avoid personal biases. So, they use the impact factor. What is the result? We find that our Operations group is at the absolute bottom, publishing only in low-quality journals, with significantly lower impact factors than the journals of the rest of the school. Only a couple of OR journals have impact factors above 2, and many have impact factors below 1. Horrible! In contrast, you have Econometrica with an impact factor close to 5 and other Economics journals with impact factors close to 4-5. Even searching across all OR journals, we could not locate any with high impact factors.
What does this mean? That OR researchers are of low quality and publish only in low-quality journals compared to the economists? Economists thought so.
Panos suggested that my views come from my experience in OR conferences. That is not true: I, like Panos, live at the intersection of CS and business, though I am more aligned with OR than CS. But I have been on the program committees of the CS-oriented conferences CP, CPAIOR, PATAT, MISTA, ADT, AICS, COMSOC, and SoCS, just to list my current EasyChair conferences. So I do have a pretty good view of how this sausage is made (and I also know journal publication, and its drawbacks).
When I am speaking dean-speak, which I do periodically, I talk about the need to prove impact, and the necessity for evaluation committees to be open to alternative views of impact. And when I write letters for tenure candidates with non-journal-oriented vitae, I can write a pretty persuasive letter on how to evaluate that impact (hint: I don’t follow the process that Panos talks about in his previous comment).
But for those working outside of pure CS, being evaluated by those outside the field, it is a heck of a lot easier to write a persuasive letter if there are some journal publications to back things up. And, as the original pointer (to a “pure” CS blogger) discussed, there are things in journal publications that don’t currently appear in conference publications. Fortunately, as Panos says, I guess journal publications are pretty easy to get.
So I’ll stand by my assertion that people should still be publishing in journals. But I am not confident that I’ll feel the same way in five years.
I’m sympathetic to the comment from “junior faculty” about review times, but a while back I did a stint as an AE for a quant journal, and it was a bit of an eye-opener to see how hard it was to get qualified reviewers to do reviews at all, let alone promptly. Reviewing is uncompensated, not necessarily highly valued at promotion/raise time, and time-consuming (if done well). You may (and I stress “may”) get an atta-person from the journal. So editors are loathe to clamp down on referees for fear of scaring them off. Which is not to defend the current operation of the system — it’s just not as easy as it looks to fix it.
Impact factor remains a mystery to me. Personally, I’m inclined to scale the impact factor by the average length of the bibliography in a paper in the area. (Ever look at how long the reference list for a paper in OB is?) Even so, a while back I looked at impact factors just for OR-type journals (ones I’ve published in, or might some day). A journal that is low on my prestige list — I won’t name it so as not to offend them — had an impact factor well above most of what I thought were better ones (including, I think, above MS and OR). Maybe that means the articles are accessible to a wider audience?
Incidentally, I once published in that high-impact/low-prestige journal. It’s not a paper I think of as particularly impressive in any sense, but it’s easily my most frequently cited paper. I somehow take no comfort in that.
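To make that scaling idea concrete, here is a minimal sketch of the adjustment I have in mind (the notation is my own, not an established metric): with $IF_j$ the impact factor of journal $j$ and $\bar{r}_f$ the average number of references per paper in journal $j$’s field $f$, compare journals across fields via

$$\widetilde{IF}_j = \frac{IF_j}{\bar{r}_f},$$

so that a field whose papers each cite 60 references does not automatically dwarf one whose papers cite 20.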
Hi all,
First, consider that I’m not in academia full time; I’m just an adjunct professor (I’d like to say I’m not a professional professor but a “professor profesional”).
I’ve always seen the publishing process as stimulating for those seeking tenure; stimulating because of the motto “publish or perish”. However, this situation has created perverse incentives. What I would suggest is a more comprehensive system, in which someone uses seminars, workshops, and conferences as a proof of concept for their ideas, papers, and dissertations. Then go ahead and publish. But don’t stop there: considering the real-life applicability of your area of expertise, try to apply the work and prove its impact. Want a tenured position? Follow that track.