New Blogs and Welcome Graham!

On my sidebar, I try to keep track of all the operations research oriented blogs. There are still few enough that I think I can keep a complete list (even allowing for a pretty broad view of operations research). The advantage of being on the list is that new posts on each of those blogs show up in my “From the OR Blogs” feed. Further, many of the posts are fodder for my twitter stream, which reaches literally dozens of people! So, if you are posting in the blogORsphere and I don’t list you, please let me know: I don’t mean to ignore you (though if you don’t post for two months, you go onto my “inactive” list, so keep the posts coming).

On that note, let me welcome Graham Kendall, who has begun Research Reflections. Graham runs the MISTA conference series that will take place next in Dublin in August. Graham is a good friend of mine, even if he did dump me during a conference, forcing me to listen to a very boring lecture on art when I could have been enjoying a pint with him in a congenial pub (there were extenuating circumstances: my attention wandered during the critical “let’s get the heck out of here” moment). So I have forgiven him that, and recommend to you both his blog and the MISTA conference (I am on its advisory committee, so I have some biases here).

And please check out all of the OR Blogs, and the “From the OR Blogs” in the sidebar (both of which appear if you go to the main page of this blog). There is a lot of great stuff out there.

Conference Proceedings are Not Enough

In much of operations research, a conference is simply an opportunity to give a talk on recent research.  At INFORMS, EURO, IFORS and many other conferences, there are no printed proceedings, and no real record of what was presented in a talk.  While giving a talk is useful, it doesn’t really count for much in most promotion and tenure cases.  If you want to continue in academic OR, you need to publish papers, generally in the “best” journals possible.

However, in some parts of OR, particularly those parts that overlap with CS, conference presentations are much more competitive and prestigious.  In my own area, conferences such as CP, CPAIOR, PATAT, MISTA, INFORMS-Computing and a few others are competitive to present at.  A full (15-page or so) or short (5-page) paper must be submitted, and these are reviewed (with varying amounts of rigor).  Acceptance rates can be as low as 20%, and are rarely above 40%.   The papers are then published either in a book on their own or in a series such as Lecture Notes in Computer Science.   These do “count” towards promotion and tenure, and researchers who can consistently get accepted at these conferences are very well thought of.

This has led, however, to some researchers (and entire swathes of some subfields) simply not publishing in archival journals.  I have seen resumes from some very good researchers that have essentially no journal papers.  I can understand the reasons:  journal publishing is a slow and frustrating process (and I am part of that problem, though I am getting better at refereeing and editorial roles!).  Further, since journals will typically not publish verbatim versions of papers published at conferences, new things must be added.  It is unappealing to go back to the topic just to leap over a journal publication barrier.

But I think it is necessary to publish in journals where the refereeing is generally more thorough and the page limits are such that topics can be thoroughly explored.  Samir Khuller at the Computational Complexity blog has made a similar argument (thanks to Sebastian Pokutta for the pointer):

It’s very frustrating when you are reading a paper and details are omitted or missing. Worse still, sometimes claims are made with no proof, or even proofs that are incorrect. Are we not concerned about correctness of results any more? The reviewing process may not be perfect, but at least it’s one way to have the work scrutinized carefully.

Panos Ipeirotis, whose blog is titled “A Computer Scientist in a Business School” has objected to this emphasis on journal papers:

Every year, after the Spring semester, we receive a report with our annual evaluation, together with feedback and advice for career improvement (some written, some verbal). Part of the feedback that I received this year:

  1. You get too many best paper awards, and you do not have that many journal papers. You may want to write more journal papers instead of spending so much time polishing the conference papers that you send out.
  2. You are a member of too many program committees. You may consider reviewing less and writing more journal papers instead.
I guess that having a Stakhanovist research profile (see the corresponding ACM articles) is a virtue after all.

Panos also has an interesting proposal to get rid of acceptance/rejection completely.

I have mixed feelings on this.  On one hand, conferences work much more efficiently and effectively at getting stuff out (there is nothing like a deadline to force action).  On the other hand, having watched this process for both conferences and journals, I am much more confident in stuff published in journals (by no means 100% confident, but more confident).  Too many conference papers dispense with proofs (and have, in fact, incorrect results) for me to be happy when only conference papers are published.

Finally, in a business school at least, but I believe also in industrial engineering, promotion and tenure cases need to be made outside the field to people who are still overwhelmingly journal-oriented.  I would rather spend my time explaining a paper and saying why it is great than justifying the lack of journal publications as a field-specific phenomenon that should not be held against the candidate.

So publish that journal paper!

Computational Sustainability

Carla Gomes from Cornell visited here a few weeks ago.  I have known Carla for a decade or so, and she has been one of the people who I have found very useful to talk to when trying to figure out the world of constraint programming.

Carla gets involved in lots of things.  She (along with others, of course, though she is the Lead PI) received a huge grant from NSF to form an Institute for Computational Sustainability, which I think is a wonderful term for an interesting topic.   The vision statement for this activity is quite provocative:

Computer scientists can — and should — play a key role in increasing the efficiency and effectiveness of the way we manage and allocate our natural resources, while enriching and transforming Computer Science.

Now, I consider Carla to be in Operations Research, even though she calls herself a computer scientist, and the topics that the Institute addresses have a strong operations research feel. One example: how can you most effectively create wildlife corridors to connect populations of endangered animals? If that is not an OR problem, I don’t know what is!
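To give a flavor of why the corridor question is an OR problem, here is a deliberately tiny sketch: treat the landscape as a grid of land-acquisition costs and find the cheapest connected strip of cells linking two habitat patches. The grid, costs, and endpoints are invented for illustration, and real corridor-design models are far richer than a shortest path, but the optimization flavor comes through.

```python
# Toy corridor model: cheapest path of grid cells connecting two habitats.
# Uses Dijkstra's algorithm over cell-acquisition costs (stdlib only).
import heapq

def cheapest_corridor(cost, start, goal):
    """Return the minimum total acquisition cost of a corridor of
    grid cells (4-connected) linking start to goal, inclusive."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Expensive land (the 9s) down the middle; the model routes around it.
land = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(cheapest_corridor(land, (0, 0), (0, 2)))  # → 7, the detour around the 9s
```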

The Institute is holding its first conference in a few weeks, with registration open until May 22. The conference has an extremely broad group of speakers and I suspect most of the conference will be spent trying to understand each other’s terminology, skills, and interests. Looks like it will be a fun week!

Further workshops at INFORMS Practice

Over at the INFORMS Practice Conference Blog, I have entries on Gurobi and ILOG, an IBM Company. Both presentations were inspiring in their own ways.

Gurobi Post:

It goes without saying that these statements are my individual views of the workshops, and are not the official word from either the companies or INFORMS.

The world of optimization software has been turned upside down in the last year.  Dash Optimization (makers of XPRESS-MP) was bought by FairIsaac (or FICO, as it is now called). ILOG, makers of CPLEX, was bought by IBM.  And three key people from ILOG, Gu, Rothberg, and Bixby, split off to form Gurobi (no prizes for guessing how the name was formed).  Gurobi held its first (I believe) technical workshop at an INFORMS Practice Conference, and had tons of interesting news.  Since “Operation Clone Michael Trick so He Can Attend All Interesting Workshops” failed, I spent the first half hour of the 3pm workshop session at the Gurobi session before moving on to another session.  Here are a few things presented.

Bob Bixby presented an overview of the history of Gurobi.  Their main goal over the last year has been to create a top-notch linear and mixed integer programming code.  I was surprised that they were able to do this in the March 2008-November 2008 period.  Since then, the optimization code has been essentially static while the firm works on things like documentation, bug fixes, user interfaces and so on.

The business model of Gurobi has three main parts:

  1. Focus on math programming solvers
  2. Flexible partnerships
  3. Technology leadership

The partnership aspect was quite interesting.  They very much value the relationship they have with Microsoft Solver Foundation (whose presentation I attended this morning), along with the partnerships they have with AIMMS, Frontline, GAMS, Maximal, and other groups.

Ed Rothberg presented the stand-alone user interface (to be released May 6), which has been implemented as a customization of the Python shell.  Some of my colleagues (in particular those at the University of Auckland) have been pushing Python, but this is the first full-scale system I have seen, and it is very impressive.

Beyond that, I can only go by the handouts, since I did some session jumping, but a few things are clear:

  1. As an optimization code, Gurobi is competitive with the best codes out there, being better than all on some instances, and worse than some on others.
  2. Gurobi is taking parallel optimization very seriously, stating that single-core optimization is nothing but a special case of its multi-core approach.
  3. Python is a powerful way of accessing more complicated features of the system.

Gurobi is already available as an add-in to other systems.  It will be available in a stand-alone system in a week or so. Further versions are planned to come out at six month intervals.

CPLEX/IBM Post:

Continuing my coverage of a few of the Technical Workshops, I reiterate that the views here are neither those of the companies nor of INFORMS.  They are mine!

Ducking out of one technical workshop, I moved on to the presentation by ILOG (now styled ILOG, an IBM Company, since IBM’s acquisition earlier this year).  It was great to see the mix of IBMers and ILOG people on the stage.  Like many (about 2/3 according to a later audience survey), I was worried about the effect of having IBM acquire ILOG, but the unity of the group on stage allayed many of those fears.  The workshop had two major focuses:  the business strategy of having IBM together with ILOG and, more technically, details on the new version of ILOG’s CPLEX, CPLEX 12.

When it comes to business strategy, IBMers Brenda Dietrich and Gary Cross put out a persuasive and inspiring story on how IBM is focusing on Business Analytics and Optimization.  How can you make an enterprise “intelligent”?  You can make it aware of the environment, linked internally and externally, anticipating future situations, and so on.  And that requires both data (as in business intelligence) and improved decision making (aka operations research).  As IBM tries to lead in this area, they see the strengths in research meshing well with their consulting activities and with their software/product acquisitions.  The presentation really was inspiring, and harkened back to the glory days of “e-business” circa 1995 with an operations research tilt (with the hopes of not having a corresponding bust a few years later).

When it comes to CPLEX 12.0, there continue to be improvements.  These were given in three areas:

  1. improved MIP performance.
  2. parallel processing under the standard license.
  3. built-in connectors for Excel, Python, and Matlab.

The improved performance was characterized by two numbers.  For instances taking more than a second to solve, the improvement was about 30%;  for harder problems taking more than 1000 seconds, CPLEX 12 is about twice as fast as 11.2 (on the problems in the extensive testbed).  Strikingly, the CPLEX testbed still has 971 models that take at least 10,000 seconds to solve, so there is still lots of work to be done here.  The improvements came through some new cuts (multicommodity flow cuts) as well as general software engineering improvements.

I think the news on parallel (multicore) processing is particularly exciting.  If our field is to take advantage of modern multi-core systems, we can’t have our software systems charging us per core.  There are some issues to be handled:  the company doesn’t want people solving 30,000+ separate models on a cloud system simultaneously for the price of one license, but some system for exploiting the 2-8 cores on most current machines must be found.  I am really pleased that this will be available standard.

I was also very happy to see the Excel add-in.  As an academic, I know that my (MBA) students are most comfortable working within Excel, and I will be very happy to introduce them to top-notch optimization in that environment (once ILOG figures out its pricing, which was unclear in the presentation).

Overall, I found this an inspiring workshop on both the business strategy and the technical sides.  IBM should also be recognized for bringing in a clicker system to get audience feedback:  that made for a very entertaining and useful audience feedback session.

One final point: IBM claims to have 800 “OR Experts”, which is a pretty good number.  If all of them became members of INFORMS, we would gain about 650 members, by my calculation.

Microsoft Solver Foundation: YAML?

Is the Microsoft Solver Foundation Yet Another Modeling Language? I have some views at the INFORMS Practice Conference Blog. Best part of the workshop: the tagline “The Right Decision”. Perhaps INFORMS should have used that instead of “The Science of Better”.

Microsoft Solver Foundation became public late in 2008, and I have been curious what it is all about, so I sat in on this morning’s technical workshop. Two hours and six pages of notes later, I think I have a better idea.  At its heart, Solver Foundation is a pure .NET-based library for mathematical programming.   At some level, that is all the justification it needs to exist.  There are a lot of .NET shops out there, so being able to work purely within .NET is a real plus to them.  For those of us who are not working in such an environment, there are still a number of nice aspects to Solver Foundation (SF), including

  1. Modeling Breadth.  SF is based on a very general algebraic modeling structure (think Mathematica), so it is, in theory, not limited to “just” mixed-integer programming.  Current solver support includes constraint programming, MIP, quadratic programming, and nonlinear programming, but the underlying structure is extremely general.
  2. Built-in parallelism to take advantage of either networks of machines or multicore systems.  Since much of the speed improvement in computers these days comes from the addition of cores (even notebook computers can have four or more cores), it is critical that systems take advantage of this.  The example given in the talk was a “horse race” between alternative optimization codes: SF will easily let you run CPLEX, XPRESS, and other solvers in parallel on the same problem, terminating when the fastest solver on that instance terminates.
  3. Integration with Visual Studio and, particularly, Excel.  My (MBA) students really like to work within Excel, which has limited our use of modeling languages.  SF gives hope that we can embed real, scalable models within Excel easily.
  4. Symbolic and rational arithmetic possibilities.  For some models, roundoff errors create huge problems.  SF solvers have the ability to work in rational arithmetic (keeping track of numerators and denominators) to provide exact calculations.
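To see the rational-arithmetic point in miniature: floating point silently accumulates roundoff, while exact rationals do not. This little illustration uses Python’s standard library (not Solver Foundation itself), but the principle is the same one SF’s rational solvers exploit.

```python
# Floating-point roundoff vs. exact rational arithmetic (stdlib only).
from fractions import Fraction

# Ten copies of 0.1 do NOT sum to exactly 1.0 in binary floating point...
float_sum = sum(0.1 for _ in range(10))

# ...but ten copies of the exact rational 1/10 do sum to exactly 1.
exact_sum = sum(Fraction(1, 10) for _ in range(10))

print(float_sum == 1.0)   # → False: roundoff has crept in
print(exact_sum == 1)     # → True: 10 * (1/10) is exactly 1
```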

For me, the best parts are the ability to combine constraint programming with mixed-integer programming, and the hope that maybe I can teach some real operations research to my MBA students through SF’s links with Excel.   Of course, it is inspiring to hear a Microsoft person talk about the multi-billion dollar market they hope to reach through optimization.
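The “horse race” idea from point 2 above can be sketched with nothing but standard-library tools: launch several solvers on the same instance concurrently and take the answer from whichever finishes first. The solver functions below are simulated stand-ins (timed sleeps), not real CPLEX or XPRESS calls.

```python
# Minimal solver "horse race": run all solvers in parallel, keep the
# first result to arrive, and cancel the laggards.
import concurrent.futures
import time

def solver_a(problem):
    time.sleep(0.2)   # pretend this solver is slower on this instance
    return ("solver_a", sum(problem))

def solver_b(problem):
    time.sleep(0.05)  # pretend this solver is faster on this instance
    return ("solver_b", sum(problem))

def horse_race(problem, solvers):
    """Run all solvers concurrently; return the first finisher's result."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(s, problem) for s in solvers]
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # best-effort: already-running solvers finish anyway
        return next(iter(done)).result()

winner, value = horse_race([1, 2, 3], [solver_a, solver_b])
print(winner, value)  # → solver_b 6 (the faster solver wins the race)
```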

My favorite part:  the tagline “The Right Decision”.  That pretty well sums up operations research.

Baseball and Operations Research

Blogged at the INFORMS Practice site on how to make a trip to a baseball game a legitimate business expense.

I just arrived in Phoenix, and I’m off to this evening’s game between the Giants and the Diamondbacks.  There is an operations research connection, of course:  both the teams and the umpires are scheduled with operations research.  So this is kinda like a site visit:  I’m there to be sure exactly two teams show up, along with four umpires!

More serious posts tomorrow when I attend some of the Technology Workshops.

Blogging for the INFORMS Practice Meeting

I am one of a stable of guest bloggers for the INFORMS Practice Meeting. Rather than double post, I’ll move over to that blog for a few days (unless I have something to say that isn’t appropriate for an INFORMS blog), with pointers from here.

My first entry there: Tough Choices!, where I complain about an embarrassment of riches.

I’m getting organized to head off to the INFORMS Practice Conference.  I fly out Saturday, and will meet up with some friends to go to the Diamondbacks game.  Sunday is shaping up to be a very full day.  I really enjoy the Technology Workshops, where software companies in operations research talk about their recent products and plans for the future.  This year is shaping up to be particularly interesting due to all the activity in the market.  Since last year’s meeting,

  1. Dash Optimization (makers of Xpress-MP) has been bought by FairIsaac, which is now named FICO
  2. ILOG has been bought by IBM, and is now styled “ILOG, an IBM Company”
  3. Gurobi has been founded by some of the people who used to be with ILOG
  4. Microsoft Solver Foundation has started
  5. Dynadec has been formed to market Comet, a hybrid solver combining optimization, constraint programming, and local search

and undoubtedly much more that I missed along the way, but will find out at the conference.  All of these companies, and many others, will be presenting “half-day” workshops on Sunday:  they are really three-hour workshops, so you can get in three of them during a very long day.  The hard part is trying to figure out which three of the twelve workshops to attend!

INFORMS Practice Conference goes Web 2.0 Crazy!

The upcoming INFORMS Practice Conference has embraced new social networking technologies as no INFORMS conference has ever done. You have your choice of

  1. Blogging. A conference blog with a half dozen guest bloggers (including yours truly).
  2. Twittering. Just use the #ipc2009 tag.
  3. LinkedIn. I’m not sure of the value of a LinkedIn group, but I want to be part of the gang!

Let’s see, what’s left? Facebook? Club Penguin?

This makes a great experiment in what helps people best engage with a conference.

I’m ready for my close-up, Mr. DeMille: the Operations Research Version

If all goes according to plan, the members of INFORMS will receive an email over the next two days.  The email outlines some reasons why you should attend the upcoming INFORMS Practice Meeting (note that you need to register by April 1 in order to get a discount on the registration fee).  Part of the email is a video featuring … me!  In my two minute schtick, I try to give you some reasons why I like the INFORMS Practice conference so much.

I found the video really hard to do.  I vacillated between spontaneous and rigid.  When spontaneous, I had enough verbal tics that it was unwatchable.  “I, um, really like the INFORMS Practice Conference, you know, um, because, um…”  Arghh!  The other extreme made me look as though madmen had captured my loved ones and were forcing me to read their manifesto against my will.  So I tried to split the difference in the final video.  Perhaps now it looks like I am being forced to read the manifesto with a verbal tic.  As my wife said, “It was fine, but you are no actor.”  Despite that, you really should think about attending the INFORMS Practice Conference:  it is inspiring to see what our field does in the real world.

If you can’t wait for the email, you can check it out here.