Gurobi versus CPLEX benchmarks

Hans Mittelmann has released some benchmark results comparing CPLEX 11.2 with the first version of Gurobi’s code (1.02) in both sequential and parallel (4-processor) mode. Mosek’s sequential code is also included in the test. Let me highlight some of the lines:

=================================================================
s problem     CPLEX1    GUROBI1     MOSEK1     CPLEX4    GUROBI4
-----------------------------------------------------------------
    air04          9         18         49          7         13
   mzzv11         89        116        434         80        116
      lrn         42        104          f         36        996
ns1671066       1828          1       2474         26          1
ns1688347       1531        315          f        716        258
ns1692855       1674        322          f        688        234
     acc5         25        247       4621         30         33

These are not chosen to be typical, so be sure to look at the full list of benchmark results. Concentrating on the first two result columns (the sequential CPLEX and Gurobi runs), Gurobi’s code seems to be holding its own against CPLEX, and sometimes posts a surprising result. The ns1671066 result certainly suggests Gurobi is doing something interesting relative to CPLEX.

It is also striking how often the speedup is more than a factor of 4. That same ns1671066 instance that CPLEX had such trouble with sequentially certainly looks much more solvable with 4 threads. But sometimes there is a significant slowdown (see Gurobi on lrn). Perhaps someone more skilled in this area can explain how we should interpret these benchmarks.

The official release of Gurobi is still two months away; it will be interesting to see how the released version performs.

15 thoughts on “Gurobi versus CPLEX benchmarks”

  1. Tree search speedups can often be superlinear, at least on small numbers of processors. The reason is that searches are not simply distributing the same work over more processors. Results generated by one processor may affect the amount of work other processors need to do.

    The search is dynamic and driven by global information that can be updated from various nodes of the tree. The update of the global upper bound (when minimizing) affects what nodes are enumerated elsewhere in the tree. If a particular upper bound value is generated by processor 4, that can short-circuit the evaluation of nodes by processor 1. If the serial code enumerates the subtree assigned to processor 1 first, that key update comes too late to save any work. Similar effects can occur for cutting planes.

    On the flip side, because every processor keeps enumerating nodes until such information arrives, the total amount of work done by all processors can be larger than for the serial code, even though the elapsed time is lower (see the toy sketch at the end of this comment).

    The interesting number above is Gurobi’s performance on lrn. There, the total elapsed time went up on four processors by an order of magnitude. In that case, it seems that the search tree generated by the parallel code is different enough that it misses some crucial solution or cut found early in the serial search.
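
    A toy sketch of the bound-sharing effect described above (plain Python, purely illustrative, and unrelated to any real solver’s internals): two workers split the root of a tiny 0/1 knapsack between them and prune against a shared incumbent, so a good solution published by either worker immediately cuts off work for the other.

      # Toy parallel branch and bound on a 0/1 knapsack with a shared incumbent.
      # The point is the mechanism, not performance: an incumbent published by one
      # worker tightens pruning in the other, which is how superlinear speedups arise.
      import threading

      values   = [60, 100, 120, 80, 30]
      weights  = [10,  20,  30, 25,  5]
      capacity = 50

      best = {"value": 0}              # shared incumbent (maximization)
      lock = threading.Lock()

      def bound(i, value, room):
          # Optimistic upper bound: fill the remaining room fractionally,
          # taking items in order of value density (an LP-relaxation-style bound).
          for v, w in sorted(zip(values[i:], weights[i:]), key=lambda t: -t[0] / t[1]):
              if w <= room:
                  value, room = value + v, room - w
              else:
                  return value + v * room / w
          return value

      def search(i, value, room):
          if value > best["value"]:
              with lock:
                  best["value"] = max(best["value"], value)        # publish new incumbent
          if i == len(values) or bound(i, value, room) <= best["value"]:
              return                                               # pruned by the shared bound
          if weights[i] <= room:
              search(i + 1, value + values[i], room - weights[i])  # take item i
          search(i + 1, value, room)                               # skip item i

      # Worker 0 explores the "item 0 in" subtree, worker 1 the "item 0 out" subtree.
      w0 = threading.Thread(target=search, args=(1, values[0], capacity - weights[0]))
      w1 = threading.Thread(target=search, args=(1, 0, capacity))
      w0.start(); w1.start(); w0.join(); w1.join()
      print("best value found:", best["value"])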

  2. Apparently superlinear speedup is rather common for algorithms involving a lot of search, simply because the search strategy (the traversal order) is not the same as in the sequential case. You’re not actually running precisely the same algorithm, and with some luck you can reach a solution much faster. But of course you may also be unlucky. Parallel computing is inherently non-deterministic, which makes timing comparisons difficult and sometimes even pointless.

    A way to achieve more deterministic timings is to use speculative parallelism: run many instances of the same algorithm in parallel with different search strategies, take the first result and discard all the others (a small sketch of the idea follows). This may become an option if 1000+ core processors become commonplace (that’s about 8 years from now, assuming the number of cores doubles every year).
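
    A minimal sketch of that speculative (portfolio) idea, in plain Python with a hypothetical stand-in solve function rather than any real solver API: launch the same model under several search strategies at once and keep whichever run finishes first.

      # Speculative parallelism sketch: several configurations race; the first one back wins.
      # solve_with_strategy is a hypothetical stand-in for a call into a real solver.
      import random
      import time
      from concurrent.futures import FIRST_COMPLETED, ProcessPoolExecutor, wait

      def solve_with_strategy(seed):
          # Pretend solver: the strategy (here just a seed) changes how long the run
          # takes, but every strategy eventually reaches the same optimum.
          rng = random.Random(seed)
          time.sleep(rng.uniform(0.1, 2.0))
          return {"seed": seed, "objective": 42}

      if __name__ == "__main__":
          strategies = [1, 2, 3, 4]              # e.g. different branching rules or seeds
          with ProcessPoolExecutor(max_workers=len(strategies)) as pool:
              futures = [pool.submit(solve_with_strategy, s) for s in strategies]
              done, not_done = wait(futures, return_when=FIRST_COMPLETED)
              winner = next(iter(done)).result()
              for f in not_done:
                  f.cancel()                     # only stops runs that have not started;
                                                 # a real portfolio would terminate workers
          print("first result back:", winner)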

  3. As a serious OR practitioner for a decade, I’ve seen some awesome math programming models built that scale well and produce robust answers to real-life business problems, yet they remain buried, and we are forced to go with (lousy) heuristics because of ILOG’s relatively prohibitive pricing structure (this is my personal opinion, and some may disagree).

    In the past, I have (sadly) even been asked to investigate if tools like CPLEX can be removed from existing products if we have in-house approaches that are half as good.

    CPLEX is fantastic, and I have the deepest regard for its inventors and developers. There are 4-5 new O.R. projects in my purview that would have happily used CPLEX and done some real-world O.R. magic (and, I think, generated enough revenue and goodwill for ILOG), if only the IT managers could justify the cost. It is tough to do that in many situations, since products like CPLEX bring so many intangible benefits that can’t be quantified to any non-OR person’s satisfaction. So in every internal presentation I make, I show “what could have been” and “here’s the crappy stuff we do”.

    We are shooting ourselves in the foot here. I would love to see CPLEX-like products make tons of money, but at the same time, their steep cost structure is a huge barrier to entry, and they spell “doom” for an O.R. practitioner like me (“we don’t need an OR PhD to build some randomized heuristic of unknown solution quality, and your math programming approaches can’t be implemented”). In retrospect, the first 3 years of my job would have been better spent reinventing a sparse-matrix LP solver 🙂

    If we want optimization to move beyond the traditional O.R.-heavy industries in a big way, somebody needs to come up with a better business model, so that all parties benefit in the long term.

    If Gurobi is even 90% as good as CPLEX and has a smarter cost structure, I expect it to find its way into lots of new optimization-based products that would otherwise never see the light of day, for the reasons mentioned above.

    Here’s wishing Gurobi all the best! There are some O.R. practitioners pulling for you.

  4. Shiva, (I am still in academia and not facing the uphill battle of justifying software expenses to customers or managers, so take my comments with a grain of salt.)

    I think another long-term way to make OR feasible in situations where the cost of Cplex cannot be justified is to invest in the long-term development of open-source alternatives. The COIN-OR solvers are quite good already (for LP and general non-linear programs they are even competitive with the state of the art). Moreover, the CPL license IBM chose is very liberal and allows the solvers to be tightly integrated into commercial products without further obligations.

    If more companies supported that development as a long-term investment, it could be in their own best interest.

  5. Sebastian: We did investigate open-source options (including COIN), but after reading the fine print, legal departments tend to shoot them down with prejudice to avoid exposing the company to potential lawsuits down the road. It was another frustrating experience, since, like all O.R. folks, I love what COIN has been doing.

  6. Shiva, great points on barriers to entry. I too have been caught in that world of justifying an IT expense for the sake of decision modeling. For instance, one can do a decision analysis just using a spreadsheet. Is it optimal? No. Does it offer a solution? Yes. The key is to justify the long-term gains of optimal decision processes. I believe that is the trick.

    Yet I have definitely found a great niche, and I think the OR community can greatly help with this issue. Open-source software has the added benefit of community support, development, and implementation without the cost structures. It’s something I have started talking about on my blog. I believe it can have profound effects on the OR community.

  7. Although Gurobi Optimization plans to offer the Gurobi Solver directly (with its own APIs and tool set) starting in April, it’s available in a full commercial release this month (February) as embedded in several modeling systems (AIMMS, Frontline, GAMS, MPL). We have it available for download right now (15-day free trial with no other restrictions) at http://www.solver.com/gurobi. You can run the Mittelmann benchmarks yourself, or solve your own models in MPS, LP and OSIL format using a little program, GurobiEval, that we include. You can create models from scratch and solve them with Gurobi, either in Excel with our modeling system, Risk Solver Platform, or in C, C++, C#, VB6, VB.NET, Java or MATLAB using our Solver Platform SDK.

    About Gurobi’s performance: We’ve been running it for months, and some of our tough customer models form part of Gurobi’s internal test set. The benchmark results on the Mittelmann test problems are clearly very good for Gurobi (take the geometric mean of the ratios of solve times across all the problems for a fair comparison). But the Mittelmann results for 4-core processors tend to understate Gurobi’s typical performance on the customer models we’ve seen. I understand from Gurobi that this is due to many factors, including the fact that Gurobi can solve a number of these problems at the root node via cuts, without ever branching, and of course its parallelism comes fully into play only once it does start branching. Our experience with our own customer models suggests that Gurobi’s speed advantage over CPLEX and other codes is often greater on multi-core processors than on single-core processors.
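
    For readers who want to compute that kind of summary themselves, here is a small sketch of the geometric mean of solve-time ratios mentioned above, fed (only for illustration) with the handful of single-thread lines quoted in the post rather than the full benchmark set.

      # Geometric mean of CPLEX/Gurobi solve-time ratios, using only the 1-thread
      # numbers quoted above -- a fair comparison needs the whole benchmark set.
      from math import exp, log

      times = {                    # instance: (CPLEX1 time, GUROBI1 time)
          "air04":     (9, 18),
          "mzzv11":    (89, 116),
          "lrn":       (42, 104),
          "ns1671066": (1828, 1),
          "ns1688347": (1531, 315),
          "ns1692855": (1674, 322),
          "acc5":      (25, 247),
      }

      ratios = [cplex / gurobi for cplex, gurobi in times.values()]
      geo_mean = exp(sum(log(r) for r in ratios) / len(ratios))
      print("geometric mean of CPLEX/Gurobi time ratios: %.2f" % geo_mean)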

