Summary of Quantum Computing

Scott Aaronson, whose writings I both admire and am jealous of, has an article in this month’s Scientific American on the limits of quantum computing. He has posted a preliminary version of the paper on his site. I found it extremely useful in trying to make sense of what quantum computing can and can’t do. It is a shame the commenters at Slashdot don’t read the paper before posting comments that show their confusion!

New CPLEX out

I got an email last week from a PR firm asking whether I would be willing to talk to some ILOG people about the upcoming release of CPLEX 11. Made me feel like a real journalist! Since I am unfortunately missing the Seattle INFORMS meeting (the 12-hour flights from Auckland to LA are killing me!), I would have missed my usual chats with Irv Lustig and the gang, so I talked with them by phone a few days ago instead.

You can read the press release about CPLEX 11. Since I have a few sports scheduling problems that are just not solving under CPLEX 10 (or anything else I have tried), I am looking forward to seeing if things are better under 11.

There were three things that struck me when talking to the CPLEX folks:

  1. Pretty well all of the improvements are in the integer programming parts, not the linear programming parts. This is in keeping with Bob Bixby’s talk at EURO (you can see a version of the talk in Seattle, too), where he said that LP improvements were enormous from 1996 to 2003 but that not much has happened in the last few years.
  2. The improvements in mixed integer programming in this version come from improved search. This contrasts with previous versions, whose improvements came from better cuts: when CPLEX went from version 6 to 6.5, for instance, there was a huge improvement in speed due to the addition of Gomory cuts. I would guess that in the academic world, search has received about 1% of the attention that cuts have gotten. There is a well-defined theory of facets, cut generation, and so on, but very little general insight into search. CPLEX 11 uses something ILOG calls “dynamic search”. If you use dynamic search, you have to give up all the callbacks that are normally used to provide user-defined search, so you are accepting the ILOG “black box” on search (see the sketch after this list for what it means to swap search strategies). This should lead to some pretty interesting testing: can problem-dependent search do better than the general search approach? The constraint programmers have always had search as a key component of their systems; it is about time integer programmers spent more time thinking about it. Is it possible to put search on the same solid foundation that cut generation has? Or will search end up being just a bunch of heuristics that happen to work well on many instances?
  3. Parallelism is getting more attention (and parallel linear programming may be the step that makes LP faster). But it is clear at this point that having 8 cores (as I do!) won’t result in an eightfold speedup. First, the root node isn’t processed in parallel, so anything with an expensive root node won’t see much improvement at all. Later, once the branch-and-bound search tree is in full swing, the amount of memory access needed is large, so the bottleneck becomes the pipe to memory. I got the impression that this is something they are working on, which is good: lots of the upcoming speed improvements are going to come from parallel computing rather than higher clock speeds.
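ILOG hasn’t said much about what dynamic search actually does, so the following is only my own illustrative sketch (in Python, with made-up knapsack data; `ITEMS`, `node_key`, and everything else here are my inventions, not anything from CPLEX). The point it tries to make is the one above: the bounding machinery stays fixed while the rule for picking the next open node changes, and the node counts can differ.

```python
import heapq

# Items pre-sorted by value/weight so the greedy fractional bound below
# is exactly the LP-relaxation optimum (a valid upper bound). Made-up data.
ITEMS = [(10, 5), (6, 4), (3, 2), (4, 3)]   # (value, weight)
CAPACITY = 8

def bound(level, value, weight):
    """LP (fractional knapsack) upper bound on any completion of this node."""
    for v, w in ITEMS[level:]:
        if weight + w <= CAPACITY:
            value, weight = value + v, weight + w
        else:
            return value + v * (CAPACITY - weight) / w
    return value

def solve(node_key):
    """Branch and bound; node_key decides which open node to expand next."""
    best, expanded = 0, 0
    frontier = [(node_key(0, 0, 0), 0, 0, 0)]   # (key, level, value, weight)
    while frontier:
        _, level, value, weight = heapq.heappop(frontier)
        expanded += 1
        if bound(level, value, weight) <= best:
            continue                             # prune against incumbent
        if level == len(ITEMS):
            best = value                         # new incumbent at a leaf
            continue
        v, w = ITEMS[level]
        for lvl, val, wt in ((level + 1, value + v, weight + w),  # take item
                             (level + 1, value, weight)):         # skip item
            if wt <= CAPACITY:
                heapq.heappush(frontier, (node_key(lvl, val, wt), lvl, val, wt))
    return best, expanded

# Two search strategies over identical bounding machinery:
best_first  = lambda lvl, val, wt: -bound(lvl, val, wt)   # largest bound first
depth_first = lambda lvl, val, wt: (-lvl, -val)           # dive to leaves fast

print(solve(best_first))    # same optimum (14) either way, but the
print(solve(depth_first))   # number of nodes expanded differs
```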

There are other new aspects to CPLEX 11 on the usability front, most notably the ability to generate all optimal solutions, not just one, which seems like a useful feature.
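I don’t know the internals of how CPLEX 11 produces multiple optima, but the underlying idea is easy to sketch. Here is a toy version in Python (made-up data; the brute-force `solve`, `value`, and `weight` helpers are mine, standing in for a real MIP solver): re-solve repeatedly, each time adding an exclusion (“no-good”) cut on the incumbent, until the objective drops below the optimum.

```python
from itertools import product

# Toy 0-1 problem: maximize c.x subject to a.x <= b (made-up data,
# chosen so that several solutions tie at the optimum).
c = [5, 4, 4, 5]
a = [3, 2, 2, 3]
b = 5

def value(x):
    return sum(ci * xi for ci, xi in zip(c, x))

def weight(x):
    return sum(ai * xi for ai, xi in zip(a, x))

def solve(excluded):
    """Brute-force stand-in for a MIP solver; honors the exclusion 'cuts'."""
    feasible = (x for x in product((0, 1), repeat=len(c))
                if x not in excluded and weight(x) <= b)
    return max(feasible, key=value, default=None)

# Enumerate ALL optimal solutions: cut off each incumbent and re-solve
# until the best remaining objective falls below the optimum.
excluded, optima = set(), []
x = solve(excluded)
z_star = value(x)
while x is not None and value(x) == z_star:
    optima.append(x)
    excluded.add(x)
    x = solve(excluded)

print(z_star, optima)   # prints 9 and the four 0-1 vectors attaining it
```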

ILOG has put together some forums on optimization and constraint programming. If you hurry, you can be the first to post to them!

New Computer Coming

I just ordered a new computer to replace this one. mat.tepper.cmu.edu has done a great job over the last five years, but it is showing its age. The operating system is woefully out of date, causing ever-increasing security problems. Of course, if it gets out of date enough, the machine becomes “secure by obsolescence,” which is how a buddy of mine (who runs HP-UX 9 or so, circa 1995) describes his “security measures”.

I ended up going a little crazy on the new machine, ordering a dual quad-core system (that is, eight cores). It is amazing how these multi-core systems have taken over. It also means something important for us operations research people: parallelism matters. I have often been down on the COIN-OR people for all the bells and whistles they put in their code (I just want a branch-and-cut code: I don’t want to dig through all the stuff about “computing on multiple heterogeneous machines” and so on). But much of their code is already set up for multiple machines (and multiple processors on one machine), and that is going to be important. Multicore is of little use if the software doesn’t take advantage of it, and expecting the operating system to automatically parallelize a complicated optimization program is currently a pipe dream. As this article points out, for a lot of software, more cores do no good whatsoever. So hurray for the COIN-OR people for thinking ahead on this.
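To make the “software has to take advantage of it” point concrete, here is a minimal sketch (my own toy, not COIN-OR code) of farming independent subproblems out to all the cores with Python’s multiprocessing module; the serial loop uses one core no matter how many the machine has.

```python
import time
from multiprocessing import Pool

def solve_subproblem(seed):
    """Stand-in for solving one independent subproblem; just burns CPU."""
    total = 0
    for i in range(2000000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    work = list(range(16))

    t = time.perf_counter()
    serial = [solve_subproblem(s) for s in work]     # one core, always
    print("serial:  ", time.perf_counter() - t)

    t = time.perf_counter()
    with Pool() as pool:                             # one worker per core
        parallel = pool.map(solve_subproblem, work)
    print("parallel:", time.perf_counter() - t)

    assert serial == parallel                        # same answers, less wall time
```

Real branch and bound is harder than this embarrassingly parallel picture, of course: the nodes share an incumbent and fight over memory bandwidth, which is exactly the bottleneck the CPLEX folks mentioned.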

It is interesting to compare the speed of this system with the listing of supercomputers given by the Top500 Supercomputers site. I can’t get a clear reading on the speed of the machine I am about to buy, but it may be in the 40-80 Gflop range (can this be? Correct me if I am wrong). Based on the historical graph, such a machine would have been the fastest supercomputer in 1993 and would still have made the top 500 as late as 2000. Now under US$7000.
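A back-of-envelope check on that guess, using assumed specs rather than anything from the vendor: eight cores at roughly 2.5 GHz, with SSE giving up to 4 double-precision flops per core per cycle.

```python
# Theoretical peak under the assumed specs above; sustained Linpack
# numbers typically land well below peak, so 40-80 Gflops is plausible.
cores, ghz, flops_per_cycle = 8, 2.5, 4
print(cores * ghz * flops_per_cycle, "Gflops peak")   # 80.0
```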