Different Mores for Different Fields

In the wake of the discussion of how different fields have different measures of evaluation (a view I am not 100% on board with: if a subgroup chooses a method of evaluation antithetical to the mores of the rest of academe, it should not be surprised when it gets little respect outside its narrow circle), it was interesting to flip through a recent issue of Nature (thanks Ilona!). In addition to a fascinating article on the likelihood of Mercury colliding with the Earth in the next 3 billion years or so (about 1/2500, if I read things correctly), I was struck by the apparently required author-contribution paragraph for co-authored papers:

J.L. designed the study, performed the simulations, and their analysis and wrote the paper. M.G. wrote the computer code.

(other articles with more coauthors divvy up the work in more detail).

We don’t do this in operations research (at least as far as I have seen). I have made a point of always going with alphabetical author listing (which generally puts me last, though I have sought out co-authors Yildiz, Yunes, and Zin recently); it has the aura of equal participation, even in cases where the participation is not so equal. Other people try to order by contribution, though it is unclear what metric to use in such a case. In promotion and tenure cases, we typically (at our school) do not try to parse out individual contributions to papers, though we do discuss individual strengths and weaknesses.

I think this sort of paragraph would actually be a boon to our literature. It would force some people to think about why they are really part of a paper, and add honesty to the system. Of course, it also adds to the arguing and power struggle that can arise in research collaborations.

Yet Another Name for Our Field

I was wandering through the Internets and came across a new blog from Palisade.com, makers of @RISK and other software. It was a nice, relevant blog with some good stories, particularly on Monte Carlo simulation. I was taken aback by the name of the blog, however: “Operation Research”. I was a little confused for a bit and had to check back with my own blog to be sure that it is “Operations Research” normally in the US. In much (but not all) of the rest of the world, it is “Operational Research”. I am off to Bonn in a couple of days for the EURO meeting. During part of my time there, I will be attending board meetings for IFORS, where I try hard to use the term “Operational Research”.

But “Operation Research”? That just sounds wrong. It is hard to tell how often the phrase occurs on web pages, since Google does some correction for you, but a search does show that it occurs, though often as a clear misspelling. I am not sure it is an improvement, but it does offer a compromise for when the “operations/operational” conflict gets to be too much.

Grumpy Wikipedians

My experience with Wikipedia has been mixed, at best, particularly in the Operations Research area. Arguing with some non-OR person about what OR is has its advantages: it forces a rethink of one’s beliefs. But it can be frustrating, since it is not clear who you are discussing changes with or what their goals and interests are. And, if you are active in societies, blogs, and so on as I am, opportunities for “conflict of interest” abound. So a couple of years ago I decided to not edit operations research aspects of Wikipedia, leaving it to others. I am grateful that others work to make the entry better, though it is still far, far too historically based for my happiness.

My mixed experience with the (non-OR) denizens of Wikipedia might not have been unusual. Francisco Marco-Serrano of the blog FM Waves pointed me to an article in New Scientist about grumpy and closed-minded Wikipedians:

Disagreeable and closed to new ideas – that’s the picture that emerges of contributors to community-curated encyclopaedia Wikipedia from a survey of their psychological attributes.

Led by Yair Amichai-Hamburger of the Sammy Ofer School of Communication in Herzliya, Israel, a team of psychologists surveyed 69 Israeli contributors to the popular online encyclopedia, comparing them with a sample of 70 students matched for age and intensity of internet use.

As Amichai-Hamburger expected, the Wikipedians were more comfortable online. “They feel the internet is a more meaningful place to them,” he says. But to his surprise, although Wikipedia is founded on the notion of openly sharing and collecting knowledge as a community, they scored low on agreeableness and openness.

“Wikipedia in a way demonstrates the spirit of the internet,” Amichai-Hamburger says. “People contribute without any financial reward.”

Amichai-Hamburger speculates that rather than contributing altruistically, Wikipedians take part because they struggle to express themselves in real-world social situations. “They are compensating,” he suggests. “It is their way to have a voice in this world.”

Of course, the same might be found out about blog writers!

Netflix Prize ready to finish?

While I have been somewhat skeptical of the Netflix Prize (in short: it seems to be showing how little information is in the data, rather than how much; and the data is rather strange for “real data”), it is still fascinating to watch some pretty high-powered groups take a stab at it. If I understand the standings correctly, “BellKor’s Pragmatic Chaos”, a group consisting of people from AT&T, Yahoo! Research, and two companies I am not familiar with (Commendo and Pragmatic Theory), has passed the 10% improvement mark, which means we are now in the final 30 days to determine the overall winner. I wonder if anyone has a killer model hidden for just this time.
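For readers unfamiliar with the scoring: the Prize metric is root-mean-square error on a held-out set, and the 10% target is measured against Netflix’s own Cinematch system, which scored an RMSE of 0.9514 on the quiz set. A minimal sketch of the arithmetic (the function names are my own):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error, the Netflix Prize scoring metric."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Cinematch, Netflix's own recommender, scored 0.9514 RMSE on the quiz set.
CINEMATCH_RMSE = 0.9514

def improvement_pct(candidate_rmse, baseline=CINEMATCH_RMSE):
    """Percentage improvement of a candidate's RMSE over the baseline."""
    return 100.0 * (baseline - candidate_rmse) / baseline

# The winning threshold: a 10% improvement means an RMSE at or below
# 0.9 * 0.9514 = 0.85626.
```

So a team reporting an RMSE of roughly 0.8563 or better has crossed the line that starts the final 30-day clock.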

New Blogs and Welcome Graham!

On my sidebar, I try to keep track of all the operations research oriented blogs. There are still few enough that I think I can keep a complete list (even allowing for a pretty broad view of operations research). The advantage of being on the list is that new posts on each of those blogs show up on my “From the OR Blogs”. Further, many of the posts are fodder for my twitter stream, which reaches literally dozens of people! So, if you are posting in the blogORsphere, and I don’t list you, please let me know: I am not meaning to ignore you (though if you don’t post for 2 months, you go onto my “inactive” list, so keep the posts coming).

On that note, let me welcome Graham Kendall, who has begun Research Reflections. Graham runs the MISTA conference series that will take place next in Dublin in August. Graham is a good friend of mine, even if he did dump me during a conference, forcing me to listen to a very boring lecture on art when I could have been enjoying a pint with him in a congenial pub (there were extenuating circumstances: my attention wandered during the critical “let’s get the heck out of here” moment). So I have forgiven him that, and recommend to you both his blog and the MISTA conference (of which I am part of the advisory committee, so I have some biases here).

And please check out all of the OR Blogs, and the “From the OR Blogs” in the sidebar (both of which appear if you go to the main page of this blog). There is a lot of great stuff out there.

Conference Proceedings are Not Enough

In much of operations research, a conference is simply an opportunity to give a talk on recent research.  At INFORMS, EURO, IFORS and many other conferences, there are no printed proceedings, and no real record of what was presented in a talk.  While giving a talk is useful, it doesn’t really count for much in most promotion and tenure cases.  If you want to continue in academic OR, you need to publish papers, generally in the “best” journals possible.

However, in some parts of OR, particularly those parts that overlap with CS, conference presentations are much more competitive and prestigious.  In my own area, conferences such as CP, CPAI-OR, PATAT, MISTA, INFORMS-Computing and a few others are competitive to present at.  A full (15 page or so) or short (5 page) paper must be submitted, and these are reviewed (with varying amounts of rigor).  Acceptance rates can range as low as 20%, and are rarely above 40%.   The papers are then published either in a book on their own or in a series such as Lecture Notes in Computer Science.   These do “count” towards promotion and tenure, and researchers who can consistently get accepted at these conferences are very well thought of.

This has led, however, to some researchers (and entire swathes of some subfields) simply not publishing in archival journals.  I have seen resumes from some very good researchers that have essentially no journal papers.  I can understand the reasons:  journal publishing is a slow and frustrating process (and I am part of that problem, though I am getting better at refereeing and editorial roles!).  Further, since journals will typically not publish verbatim versions of papers published at conferences, new things must be added.  It is unappealing to go back to the topic just to leap over a journal publication barrier.

But I think it is necessary to publish in journals where the refereeing is generally more thorough and the page limits are such that topics can be thoroughly explored.  Samir Khuller at the Computational Complexity blog has made a similar argument (thanks to Sebastian Pokutta for the pointer):

It’s very frustrating when you are reading a paper and details are omitted or missing. Worse still, sometimes claims are made with no proof, or even with proofs that are incorrect. Are we not concerned about correctness of results any more? The reviewing process may not be perfect, but at least it’s one way to have the work scrutinized carefully.

Panos Ipeirotis, whose blog is titled “A Computer Scientist in a Business School” has objected to this emphasis on journal papers:

Every year, after the Spring semester, we receive a report with our annual evaluation, together with feedback and advice for career improvement (some written, some verbal). Part of the feedback that I received this year:

  1. You get too many best paper awards, and you do not have that many journal papers. You may want to write more journal papers instead of spending so much time polishing the conference papers that you send out.
  2. You are a member of too many program committees. You may consider reviewing less and write more journal papers instead.
I guess that having a Stakhanovist research profile (see the corresponding ACM articles) is a virtue after all.

Panos also has an interesting proposal to get rid of acceptance/rejection completely.

I have mixed feelings on this.  On one hand, conferences work much more efficiently and effectively at getting stuff out (there is nothing like a deadline to force action).  On the other hand, having watched this process for both conferences and journals, I am much more confident in stuff published in journals (by no means 100% confident, but more confident).  Too many conference papers dispense with proofs (and have, in fact, incorrect results) for me to be happy when only conference papers are published.

Finally, in a business school at least, but I believe also in industrial engineering, promotion and tenure cases need to be made outside the field to people who are still overwhelmingly journal oriented.  I would rather spend my time explaining a paper and saying why it is great than justifying the lack of journal publications as a field-specific phenomenon that should not be held against the candidate.

So publish that journal paper!

Visualization of Visualizations

Stuart Mitchell, a buddy from my New Zealand year (and, I hope, soon a coauthor), passed along a neat “Periodic Table of Visualization Methods” from visual-literacy.org.  If you mouse over each box, you get a quick picture of a particular type of visualization.  Given my own biases, I am very taken with the “information visualizations” and less so with the “compound visualizations” (which look like USA Today’s “graphics” gone mad).  There are a lot more ways to visualize things, but it is neat to see this collection.

INFORMS: 30,000 members or 5,000?

When I was elected President of INFORMS in 2000 (my Presidential Year was 2002:  they ease you into the job!), I was very proud to become President of a 14,000 member society (at the age of 42:  don’t let the grey hair fool you).  14,000?  Actually probably 12,000.  Maybe 11,500.  Where did all the members go?  As I looked into things, I was pointed to (thanks Les Servi!) Bowling Alone, which gave exhaustive statistical evidence that social capital activities of all types (including professional society membership) were decreasing.   The importance of social capital and the need for societies to increase social capital opportunities became the theme of my presidency.  We did some good things during my year, and many of those have continued.

But INFORMS remains a 10,000-12,000 member society.  Financially, this is currently not much of an issue:  “membership” on the INFORMS books loses money, so a smaller membership does not hurt the bottom line. But the times, they are a-changin’. The main moneymaker for INFORMS is publications, with a very strong emphasis on academic library subscriptions.   INFORMS would be a financially healthy organization if all it did was publish Management Science.  But you don’t need to be a diviner to see that this is not a stable base.  Academic libraries are cutting budgets and alternative publication outlets are increasing in importance.  Even now, I need to stress to my colleagues from a computer science background that they (currently) need to publish in journals:  for them, conferences provide the primary outlet.

Even beyond the financials, having a strong membership is a good thing for our field.  While I was convinced by Bowling Alone that a decreasing membership is not the sign of the death of a field, not everyone buys that argument.  If operations research is as important as, say, economics, why are there 20,000 members of the American Economic Association but only 10,000 members of INFORMS?  (By the way, the AEA table gives a good picture of the issues every society is facing:  is economics really 20% less relevant now than it was in 2001, as given by the AEA membership numbers?).

So, to get to the crux, can INFORMS be a 20,000 (or 30,000 or 50,000) member society?  The US Bureau of Labor Statistics believes there are 58,000 OR analysts, and predicts this to increase to 65,000 in 2016.  I would guess that no more than 1,000 of these are members of INFORMS (I would not fall in this category, and I am pretty typical of INFORMS members).  Is this our market?  How would we get them?  Or are there people in our traditional group (Ph.D.s or students towards that degree, primarily in academia but many in practice or academics with a practice bent) that we should be aiming for?  Or perhaps retention is the issue:  we lose 20-30% per year (I believe), meaning we have to attract 2,000-3,000 new members per year just to stay even.
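The churn arithmetic is worth making explicit: membership settles wherever annual losses balance annual recruitment. A quick sketch (the 25% churn rate and recruitment figures below are the rough numbers above, not official INFORMS statistics):

```python
def steady_state(new_per_year, churn_rate):
    """Membership level at which annual losses equal annual recruitment:
    churn_rate * N = new_per_year, so N = new_per_year / churn_rate."""
    return new_per_year / churn_rate

def project(members, new_per_year, churn_rate, years):
    """Year-by-year membership trajectory under constant recruitment
    and a constant churn rate."""
    for _ in range(years):
        members = members * (1.0 - churn_rate) + new_per_year
    return members

# At 25% churn, 2,500 recruits per year sustains exactly 10,000 members;
# cut recruitment to 2,000 and membership drifts down toward 8,000.
```

The point of the sketch: at these churn rates, even holding steady at 10,000 requires recruiting a quarter of the membership anew every single year.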

Or should INFORMS be happy decreasing to 5,000 members, perhaps while still providing services to a larger group?  Would this be a bad outcome?

I’m on a few committees for INFORMS that look at these issues, but, now that my Board time is done, I don’t speak for INFORMS.  So I am interested in your views, loyal reader of MTORP:  What should INFORMS do?  The easy answer is to provide more at a lower cost.  That is going to be hard to do.

We can provide less at a lower cost: imagine a $30 membership where you get nothing more than a subscription to OR/MS Today (a fantastic magazine).  Everything else is a la carte.  You want to go to a conference:  no member discount (or perhaps you have to be a member, so you have saved $30);  you want a journal:  here’s the cost;  want a subdivision:  they all now charge real dues.  $30 gets you in the door:  everything else has a price tag.  Jim Orlin provided one vision of a lower cost membership.

Or perhaps we increase membership to $250 (it is currently $144).  We upgrade the website to create a true social network.  Everything becomes cheaper (for members!).  But we lose lots of members who don’t want to pay $250.

But I don’t want to provide too many possibilities:  I’d like your views.  What would you like INFORMS to do, and why?

Pittsburgh: Hotbed of Operations Research and Baseball

Pittsburgh is becoming the center of the universe when it comes to combining baseball with operations research.  First, there is … well, me! … a Professor of Operations Research whose company provides Major League Baseball with their player and umpire schedules.  And, beginning last year, Pittsburgh has had Ross Ohlendorf, who has converted his Princeton degree in Operations Research and Financial Engineering into 5 wins and a 4.82 ERA (this year) as a starting pitcher for the Pittsburgh Pirates.

Ross seems a serious OR guy.  He did his undergraduate thesis on the financial return of players from the draft.  Overall, his conclusion was that teams get a pretty good return from the money they put into top draft picks.  ESPN has a nice article on Ross’s OR side.

In his thesis, Ross looked at drafts from 1989-1993.  Some players offered tremendous return:

Ohlendorf determined that the average signing bonus during those years was $210,236, and the average return was $2,468,127. Here are the top 10 players from his study.

“So based on the assumptions I made in my paper, the A’s signing Giambi was the biggest winner in top-100 picks of the 1989 through 1993 drafts because he played extremely well in his first six years of major league service,” Ohlendorf said. “The White Sox did the best job in these drafts, with an internal rate of return of 217 percent. Their best signing was Frank Thomas.”
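An internal rate of return, for those who have not seen one, is the discount rate at which a stream of cash flows has zero net present value; for a signing bonus (one outflow) followed by a player’s surplus value (inflows), it can be found by simple bisection. A sketch with invented numbers, not Ohlendorf’s data:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the rate at which NPV = 0.
    Assumes one initial outflow followed by inflows, so NPV is strictly
    decreasing in the rate and the root in (lo, hi) is unique."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid  # NPV still positive: the breakeven rate is higher
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical: a $100k bonus returning $317k one year later is a 217% IRR.
```

A 217% rate of return on draft signings, if the assumptions hold up, is a number most general managers would happily take.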

It is nice to see that Ross’s intelligence does not come at the expense of collegiality:

Ohlendorf is also a popular guy in the Pirates’ clubhouse. “He is so smart,” said Pirates shortstop Jack Wilson. “We give him a hard time about how smart he is, and he’ll come right back at us. We’ll say, ‘Ross, what is the percentage chance of this or that happening?’ and he’ll say, ‘The percentage chance of you winning that game of Pluck [a card game] is 65.678 percent, not 65.667 percent.”’

Starting pitcher might not be a standard job with an OR degree, but with a 2009 salary of $491,000, Ross may have found one of the more lucrative outcomes.

Ross:  if you read this, I’ll be the guy in the stands with a sign “Operations Researchers for Ohlendorf”!

Humanitarian Operations Research

Two and a half years ago, I spent a sabbatical year in New Zealand.  I had a great year, and very much enjoyed the vibrant research life at the University of Auckland, and the even more interesting life of living in New Zealand (you can check out my blog from the year, and perhaps especially some pictures from the house we lived in).  And the research was good, allowing me a chance to finish some things I was working on and to start some new things.

Despite the success of the year, I have had a nagging feeling that I could have done something more … useful in the year.  Does the world really need a slightly better soccer schedule?  Are my new thoughts on logical Benders’ approaches really important?

Before I left for New Zealand, I had been talking with some people from Bill Clinton’s foundation who worked on AIDS/HIV issues.  In the AIDS world, “operations research” has a different meaning than the meaning in my world.  In the AIDS world,  it means designing tests of alternative approaches and evaluating the results of those tests.   I would call that statistical experimental design.  But the Clinton people really understood what “real” operations research could provide:  more effective allocation of scarce resources.    We had some good discussions and I pointed them to people who knew far more about this area than I did.

It was only later that I thought:  “Maybe I should spend a sabbatical year looking at AIDS/HIV issues”.  Then, in discussions with people like Luk Van Wassenhove, I learned more about the work done in “Humanitarian Operations Research”.    I think next time I have an extended period away from teaching and administrative responsibilities, I will think about how I might make the world a better place through operations research.

Until then, let me do my little bit to help advertise that side of the field.  Three faculty members from Georgia Tech (Özlem Ergun, Pinar Keskinocak, and Julie Swann) are soliciting papers for a special issue of Interfaces on the topic “Humanitarian Applications: Doing Good with Good OR”.  If you are doing work that is having a positive effect on the world, you might consider submitting to the special issue.  From the call for papers:

This special issue focuses on humanitarian applications of operations research (OR) and management science (MS) models and methods in practice, or “Doing Good with Good OR.” Examples of research topics include planning and response to large-scale disease outbreaks, such as pandemic influenza, improved logistics for reaching earthquake victims, implementation of new energy-market structures to enable greater distribution, solutions for fair and sustainable water allocation, more accurate prediction of hurricane paths and devastation, prevention of terrorist attacks through algorithmic identification of perpetrators, and reduction of poverty through new market mechanisms. Appropriate papers include descriptions of practice and implementation of OR/MS in industry, government, nongovernmental organizations, and education.

The due date for submissions is June 15.  I look forward to the issue very much.