Gurobi software now available for download

I am behind Erwin Kalvelagen, who writes an extremely useful blog where many challenging modeling problems are solved (this is one of my “must read” blogs), in announcing that Gurobi’s standalone software is now available for download.  I particularly like that the trial version allows 500 variables and 500 constraints, which is large enough to see how the software really works, at least for MIP.  I attended part of a software session at the recent INFORMS conference and left impressed both with the software and with Gurobi’s business plan.

Microsoft Solver Foundation: YAML?

Is the Microsoft Solver Foundation Yet Another Modeling Language? I have some views at the INFORMS Practice Conference Blog. Best part of the workshop: the tagline “The Right Decision”. Perhaps INFORMS should have used that instead of “The Science of Better”.

Microsoft Solver Foundation became public late in 2008, and I have been curious what it is all about, so I sat in on this morning’s technical workshop. Two hours and six pages of notes later, I think I have a better idea.  At its heart, Solver Foundation is a pure .NET-based library for mathematical programming.  At some level, that is all the justification it needs to exist.  There are a lot of .NET shops out there, so being able to work purely within .NET is a real plus for them.  For those of us who are not working in such an environment, there are still a number of nice aspects to Solver Foundation (SF), including:

  1. Modeling Breadth.  SF is based on a very general algebraic modeling structure (think Mathematica), so it is, in theory, not limited to “just” mixed-integer programming.  Current solver support includes constraint programming, MIP, quadratic programming, and nonlinear programming, but the underlying structure is extremely general.
  2. Built-in parallelism to take advantage of either networks of machines or multicore systems.  Since much of the speed improvement in computers these days comes from the addition of cores (even notebook computers can have four or more), it is critical that systems take advantage of this.  The example given in the talk was a “horse race” between alternative optimization codes: SF will easily let you run CPLEX, XPRESS, and other solvers in parallel on the same problem, terminating when the fastest solver on that instance finishes (a generic sketch of this pattern appears just after this list).
  3. Integration with Visual Studio and, particularly, Excel.  My (MBA) students really like to work within Excel, which has limited our use of modeling languages.  SF gives hope that we can easily embed real, scalable models within Excel.
  4. Symbolic and rational arithmetic possibilities.  For some models, roundoff errors create huge problems.  SF solvers have the ability to work in rational arithmetic (keeping track of numerators and denominators) to provide exact calculations.
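
For readers who would rather see the pattern than the API, here is a minimal Python sketch of such a “horse race”: launch several solvers on the same instance and take the first answer. This is a generic illustration using only the standard library, not Solver Foundation’s actual .NET interface, and the two “solvers” are stand-ins.

    from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
    import time

    # Stand-in solvers: each pretends to work on the same instance for a
    # different amount of time and then reports an answer.
    def solver_a(instance):
        time.sleep(2.0)
        return ("solver_a", 42)

    def solver_b(instance):
        time.sleep(0.5)
        return ("solver_b", 42)

    def race(instance, solvers):
        # Run every solver on the same instance and return the first result.
        # A real horse race would also tell the losing solvers to stop working.
        pool = ThreadPoolExecutor(max_workers=len(solvers))
        futures = [pool.submit(s, instance) for s in solvers]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()              # best effort; running work is not interrupted
        pool.shutdown(wait=False)
        return next(iter(done)).result()

    print(race("some_instance", [solver_a, solver_b]))   # ('solver_b', 42)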

For me, the best parts are the ability to combine constraint programming with mixed-integer programming, and the hope that maybe I can teach some real operations research to my MBA students through SF’s links with Excel.   Of course, it is inspiring to hear a Microsoft person talk about the multi-billion dollar market they hope to reach through optimization.

My favorite part:  the tagline “The Right Decision”.  That pretty well sums up operations research.

Test Gurobi yourself!

In keeping with an unusually awesome collection of comments, Daniel Fylstra of Frontline Systems wrote regarding the new linear/integer programming software from Gurobi (apologies for the self-linking, but, hey!, it’s my blog!):

Although Gurobi Optimization plans to offer the Gurobi Solver directly (with its own APIs and tool set) starting in April, it’s available in a full commercial release this month (February) as embedded in several modeling systems (AIMMS, Frontline, GAMS, MPL). We have it available for download right now (15 day free trial with no other restrictions) at http://www.solver.com/gurobi. You can run the Mittelmann benchmarks yourself, or solve your own models in MPS, LP and OSIL format using a little program GurobiEval that we include. You can create models from scratch, and solve them with Gurobi, either in Excel with our modeling system, Risk Solver Platform, or in C, C++, C#, VB6, VB.NET, Java or MATLAB using our Solver Platform SDK.

This is excellent news, and I look forward to experimenting with the new software.

If this level of comments keeps up, I’ll have to initiate a “quote of the week”!

What stops you from using Open Source in Operations Research?

In the discussion of CPLEX versus Gurobi software, things took an interesting turn when it came to open source (like that of COIN-OR).  Sebastian, bemoaning the cost of commercial packages such as CPLEX, said:

I think another long-term way to make OR feasible in situations where the cost of Cplex cannot be justified is to invest in the long term development of open source alternatives. The COIN-OR solvers are quite good already (for LP and general non-linear programs they are even competitive with state-of-the-art). Moreover the CPL license IBM chose is very liberal and allows the solvers to be tightly integrated into commercial products without further obligations.

If more companies would support the development as a long-term investment it could be in their best interest.

Shiva, an OR practitioner for a decade replied:

Sebastian: We did investigate open-source methods (including COIN), but after reading the fine print, legal departments tend to shoot it down with prejudice to avoid exposing the company to potential lawsuits down the road. It was another frustrating experience, since, like all O.R. folks, I love what COIN’s been doing.

Larry, of IEOR Tools, replied:

Shiva, great points on barriers to entry. I too have been caught in that world of justifying an IT expense for the sake of decision modeling. For instance, one can find a decision analysis by just using a spreadsheet. Is it optimal? No. Does it offer a solution? Yes. The key is to justify the long-term gains of optimal decision processes. I believe that is the trick.

Yet I have definitely found a great niche, and I think the OR community can greatly help with this issue. Open Source software has the added benefit of community support, development, and implementation without the cost structures. It’s something I have started talking about on my blog. I believe it can have profound effects on the OR community.

I repeat all this because this is one of the best exchanges we have had on MTORB.  I am part of COIN-OR (a member of the “Strategic Leadership Board”, though I feel at sea compared with my more experienced colleagues).  And it does interest me why more people don’t use COIN-OR.  For me, there are issues with documentation:  I don’t have a lot of time when I learn a new tool, and much open source software doesn’t have documentation and examples at the level of commercial software.  I spent a summer working with Symphony, now part of COIN-OR, and it was great fun.  I even put together a document on my experience that was part of the distribution at one point.  But I am not sure how many summers I can spend working through new software.  So I use COIN-OR software, but not as much as I should.
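
As a point of reference on how low the entry cost can be, here is a minimal sketch of solving a tiny integer program with CBC, the COIN-OR branch-and-cut solver, driven from Python through the PuLP modeling package. This assumes you have PuLP installed (it bundles a CBC binary), and the model is a toy invented purely for illustration.

    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

    # A toy knapsack: pick items to maximize value subject to a weight limit.
    values = {"a": 10, "b": 13, "c": 7}
    weights = {"a": 4, "b": 6, "c": 3}
    capacity = 8

    prob = LpProblem("toy_knapsack", LpMaximize)
    pick = {i: LpVariable("pick_" + i, cat="Binary") for i in values}

    prob += lpSum(values[i] * pick[i] for i in values)               # objective
    prob += lpSum(weights[i] * pick[i] for i in values) <= capacity  # weight limit

    prob.solve(PULP_CBC_CMD(msg=False))   # CBC does the branch and cut
    print({i: pick[i].value() for i in values})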

I am also interested in why people do use COIN-OR and other open source tools.  I very much welcome thoughts on this issue, and thank Sebastian, Shiva, and Larry for getting this going.

Gurobi versus CPLEX benchmarks

Hans Mittelmann has released some benchmark results comparing CPLEX 11.2 with the first version of Gurobi’s code (1.02) in both sequential and parallel (4-processor) modes.  Mosek’s sequential code is also included in the test.  Let me highlight some of the lines:

=================================================================
s problem     CPLEX1    GUROBI1     MOSEK1     CPLEX4    GUROBI4
-----------------------------------------------------------------
    air04          9         18         49          7         13
   mzzv11         89        116        434         80        116
      lrn         42        104          f         36        996
ns1671066       1828          1       2474         26          1
ns1688347       1531        315          f        716        258
ns1692855       1674        322          f        688        234
     acc5         25        247       4621         30         33

These are not chosen to be typical, so be sure to look at the full list of benchmark results.  Concentrating on the first two columns, Gurobi’s code seems to be holding its own against CPLEX, and sometimes has a surprising result.  The ns1671066 result certainly looks like Gurobi is doing something interesting relative to CPLEX.

It is also striking how often the speedup is more than a factor of 4.  That same ns1671066 instance that CPLEX had such trouble with sequentially certainly looks much more solvable with 4 threads.  But sometimes there is a significant slowdown (see Gurobi on lrn).  Perhaps someone more skilled in this area can explain how we should interpret these benchmarks.

The official release for Gurobi is still two months away:  it will be interesting to see how the released version works.

Bugs and Modeling

The web was all abuzz on December 31 as the 30GB Microsoft Zune players all stopped working.  What was up?  Was it a terrorist attack?  Solar flares?  A weird Y2K bug almost a decade later?

The truth is a bit prosaic:  there was simply a bug related to leap years.  Since the Zune was not around four years ago, 2008 was the first time the bug could show itself.  There are descriptions of the bug in numerous places:  here is one if you haven’t seen it yet.  Bottom line:  a section of code for converting “days since January 1, 1980” (when the universe was created) to years, months, and days didn’t correctly handle the last day of a leap year, leading to an infinite loop.
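
For those who haven’t seen the details, the reported culprit boils down to a loop like the following. This is a Python paraphrase of the widely circulated C driver code, so treat it as an illustration rather than the literal source.

    ORIGIN_YEAR = 1980

    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def days_to_year(days):
        # Convert "days since January 1, 1980" into a year by stripping off
        # whole years one at a time.
        year = ORIGIN_YEAR
        while days > 365:
            if is_leap_year(year):
                if days > 366:
                    days -= 366
                    year += 1
                # BUG: when days == 366 (December 31 of a leap year), neither
                # branch changes anything, so the loop never terminates.
            else:
                days -= 365
                year += 1
        return year

On December 31, 2008 the conversion reached exactly 366 remaining days in the leap year 2008, and the device hung.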

It is easy to laugh at such a mistake:  why didn’t a code review or unit testing catch something so simple?  But such “simple” parts of the code seem to be the ones least likely to get tested.  When you have to test all sorts of complicated things like checking authorization, playing music, and handling the interface, who expects problems in a date calculation?  And hence a zillion Zunes failed for a day.


I experienced something similar when I was reviewing some code I use to create a sports schedule.  Never mind the sport:  it doesn’t matter.  But the model I created aimed to have a large number of a particular type of game in a particular week.  And, for the last few years, we didn’t get many of those games in that week.  This didn’t particularly worry me:  there are a lot of constraints that interact in complicated ways, so I assumed the optimization was right in claiming the best number was the one we got (and this was one of the less important parts of the objective).  But when I looked at the code recently, I realized there was an “off by one” error in my logic, and sure enough the previous week had a full slate of the preferred games.  Right optimization, wrong week.  Dang!

So one of my goals this week, before class starts, is to look at the code again with fresh eyes and see what other errors I can find.  There are some things I can do to help find such bugs, like trying the model on small instances and turning various constraint types on and off, but one difficult aspect of optimization is that knowing the optimal solution requires … optimization, which makes these sorts of errors very hard to spot.
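
In case it helps anyone else, here is the kind of toggling harness I have in mind, sketched in Python with the same PuLP/CBC combination as the earlier sketch. The model and the constraint families are invented purely for illustration; the point is being able to switch each family on and off and watch how the optimal value moves.

    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

    # Toggle constraint families independently to see which one is responsible
    # for a suspicious optimal value.
    ENABLE = {"capacity": True, "pairing": False}

    prob = LpProblem("debug_sketch", LpMaximize)
    x = [LpVariable("x%d" % i, lowBound=0, upBound=1) for i in range(4)]
    prob += lpSum(x)                       # objective: select as much as possible

    if ENABLE["capacity"]:
        prob += lpSum(x) <= 3, "capacity"
    if ENABLE["pairing"]:
        prob += x[0] + x[1] <= 1, "pairing"

    prob.solve(PULP_CBC_CMD(msg=False))
    print("objective with", ENABLE, "=", prob.objective.value())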

How far does Santa Travel?

As we all know, Santa Claus has to visit every (good) little girl and boy, all in one night.  Fortunately, thanks to the time zones, he has about 24 hours to do so.  Now, the earth is about 25,000 miles around, so with that much travel he could zip once around the globe (as an aside, I just traveled from the US east to Singapore, then returned in a westerly direction, which is not quite 25,000 miles but is close), but that wouldn’t let him visit very much.  How much farther does he have to travel to see everyone?

Bill Cook and his coauthors at Georgia Tech have a traveling salesman problem instance that consists of 1,904,711 populated sites.  Finding good TSP tours of this instance is definitely a non-trivial task, but people have been working at it:

The best reported tour for the World TSP was found by Keld Helsgaun using a variant of his LKH heuristic algorithm. His tour of length 7,515,947,511 was found on November 27, 2008. Previous best known tours were of length 7,517,285,610 also by Helsgaun (September 16, 2003) and of length 7,518,425,642 by Hung Dinh Nguyen, Ikuo Yoshihara, Kunihito Yamamori, and Moritoshi Yasunaga (June 2, 2003), using a combination of iterated Lin-Kernighan and a parallel hybrid genetic algorithm.

The distances given are in meters, so the best tour comes to 4,670,193.27 miles.  This tour is not (necessarily) optimal, so perhaps Santa knows a better tour of these points, but the lower bound is currently 7,512,218,268 meters (or 4,667,876.02 miles), so he can’t improve much.  I was surprised that Santa has to do the equivalent of going around the world at least 187 times or so, but that’s what it takes to visit all those points.  Santa’s tour is further restricted since he has to worry about time zones:  it wouldn’t do for him to arrive in a town before the kids are asleep.

So those nine (or ten, if you count Olive, the Other Reindeer) reindeer move along at about 195,000 miles per hour (at least:  you might want to add time for dropping off presents, eating cookies, drinking milk, and so on).  They really move!
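
For anyone who wants to check the arithmetic, the conversions are straightforward; here is a quick Python sketch (using 1,609.344 meters per mile).

    METERS_PER_MILE = 1609.344

    best_tour_m = 7515947511        # Helsgaun's tour, in meters
    lower_bound_m = 7512218268      # proven lower bound, in meters

    best_tour_miles = best_tour_m / METERS_PER_MILE    # about 4,670,193 miles
    bound_miles = lower_bound_m / METERS_PER_MILE      # about 4,667,876 miles
    laps = best_tour_miles / 25000                     # about 187 trips around the earth
    speed = best_tour_miles / 24                       # about 195,000 miles/hour

    print(round(best_tour_miles), round(bound_miles), round(laps, 1), round(speed))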


Need for Parallelism

The operations research world is crying out for better parallel approaches to our problems. Note that for $10,000 you can buy an NVIDIA Tesla Personal Supercomputer, with speeds up to 4 teraflops (which, unless I am reading the graph wrong, easily puts it among the top 500 supercomputers on the Top 500 list). The catch? You get the speed through hundreds of processors (processors designed for graphics calculations, but easily programmed for other tasks). I can see it now: “Hello, ILOG? I’d like 960 licenses for parallel CPLEX! … That will be how much??” If one key part of our field is linear and integer programming, we really need to get on the bandwagon here and show how our field can take advantage of machines like this.

Of course, there is work being done, particularly by the open source wizards associated with COIN-OR, but this looks like an incredibly rich and important research area. What kind of speedup can we get with 960 processors? 500? 100? 2? Has anyone done any OR on this sort of machine?
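
One rough way to frame the question is Amdahl’s law: if a fraction p of the running time parallelizes perfectly, the best possible speedup on N processors is 1 / ((1 - p) + p/N). The quick Python sketch below tabulates this for a few made-up values of p; even a small serial portion becomes very punishing at 960 processors.

    def amdahl_speedup(p, n):
        # p: fraction of running time that parallelizes perfectly
        # n: number of processors
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 100, 500, 960):
        for p in (0.5, 0.9, 0.99):
            print("N=%4d  p=%.2f  speedup=%7.1f" % (n, p, amdahl_speedup(p, n)))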

Knuth and Multicore Systems

Donald Knuth has been publishing “fascicles” of Volume 4 (Combinatorial Algorithms) of his epic The Art of Computer Programming. These are shortish (100-150 page) sub-chapters of a work on an area that expands faster than Don can write. You can download some of the preliminary versions on Knuth’s page to get the flavor.

Knuth was interviewed by Andrew Binstock of InformIT. Great interview with very provocative questions and answers! I particularly liked the exchange on parallel algorithms and multicore systems. Don is not a fan:

Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you’ve published for Volume 4 of The Art of Computer Programming (TAOCP) doesn’t seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you’re currently working on?

Donald: The field of combinatorial algorithms is so vast that I’ll be lucky to pack its sequential aspects into three or four physical volumes, and I don’t think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity.

Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions?

Donald: I don’t want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks!

I have been struggling to take advantage of my (now not-so-)new computer and its 8 cores. Since about 90% of my CPU cycles are spent in CPLEX, and I only have a single-core license, I am actually running at about 10% of capacity on that machine. And my efforts to write some specialized codes have not been successful, though that is perhaps more due to lack of time and effort than to any inherent difficulty. Does the new generation of OR researchers understand and feel comfortable with multi-core programming? Or are we now just going to stall in terms of computation speed in practice?

ILOG Optimization Decision Manager Hands On

I sat through (OK, half of) the ILOG workshop on their Optimization Decision Manager.  The ILOG ODM can be seen as a front-end to the rest of the ILOG Optimization systems (like OPL).  I would think of it as a super-sized way of doing version control and what-if analysis.  I use ILOG software in a lot of my research (and consulting) and I generally do something like

  1. Optimize a system
  2. Change some of the constraints and data
  3. Not like the results, and
  4. What was 1. again?

Rather than doing intermediate saves (hmmm… I wonder what “testjunk.mod” is?), ODM allows you to save scenarios with differing data, models, parameters, goals, and so on, and then compare data and results between scenarios.  This makes it much easier to mess around with instances and keep track of what works and what doesn’t.  Embedded within ODM is the ability to make constraints “soft” (putting them in the objective with differing weights) and to do goal programming.
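
To make the “soft constraint” idea concrete, the usual trick is to add a violation variable and charge for it in the objective with a chosen weight; this is, in essence, what making a constraint soft in the objective amounts to. Here is a minimal sketch in Python with PuLP/CBC, in the same style as the earlier examples; the data and the penalty weight are invented.

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

    # Hard version: total hours <= 40.  Soft version: the cap may be exceeded,
    # but each extra hour costs PENALTY_WEIGHT in the objective.
    PENALTY_WEIGHT = 5.0

    prob = LpProblem("soft_constraint_sketch", LpMinimize)
    hours = [LpVariable("hours_%d" % j, lowBound=15, upBound=20) for j in range(3)]
    over = LpVariable("overage", lowBound=0)   # amount by which the cap is exceeded

    base_cost = lpSum(2.0 * h for h in hours)
    prob += base_cost + PENALTY_WEIGHT * over          # penalized objective
    prob += lpSum(hours) <= 40 + over, "soft_cap"      # violation allowed, at a price

    prob.solve(PULP_CBC_CMD(msg=False))
    print("total hours =", sum(h.value() for h in hours), " overage =", over.value())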

I am not quite certain who the target user for this is.  I like the idea of using it in my own work, where I am the user.  I shudder at the thought of giving this system to someone without a reasonable understanding of operations research:  the “end user” here had better be pretty sophisticated.

Overall, I am glad that I attended the half of the session I did.  I do think we need better hardware/software/displays for demos:  squinting at what appeared to be 4-point font on a screen was not particularly illuminating, and the resulting headache chased me from the room halfway through.