Knuth and Multicore Systems

Donald Knuth has been publishing “fascicles” of Volume 4 (Combinatorial Algorithms) of his epic The Art of Computer Programming. These are shortish (100–150 page) sub-chapters of a work on an area that expands faster than Don can write. You can download some of the preliminary versions from Knuth’s page to get the flavor.

Knuth was interviewed by Andrew Binstock of InformIT. It is a great interview, with very provocative questions and answers! I particularly liked the exchange on parallel algorithms and multicore systems. Don is not a fan:

Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you’ve published for Volume 4 of The Art of Computer Programming (TAOCP) doesn’t seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you’re currently working on?

Donald: The field of combinatorial algorithms is so vast that I’ll be lucky to pack its sequential aspects into three or four physical volumes, and I don’t think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity.

Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions?

Donald: I don’t want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks!

I have been struggling to take advantage of my (now not-so-)new computer and its 8 cores. Since about 90% of the CPU cycles I use are spent in CPLEX, and I only have a single-core license, I am actually running at about 10% capacity on that machine. And my efforts to write some specialized codes have not been successful, though that is perhaps more due to lack of time and effort than any inherent difficulty. Does the new generation of OR researchers understand and feel comfortable with multi-core programming? Or are we now just going to stall in terms of computation speed in practice?

{ 2 } Comments

  1. Brian Borchers | April 26, 2008 at 9:30 pm

    Getting parallel codes to work efficiently and correctly is devilishly hard work, at least with the tools that are widely used today.

    Having said that, I think that lots of undergraduate CS students are taking classes in parallel computing and trying to learn how to use OpenMP, pthreads, and MPI. Most of them can’t seem to translate this into any ability to actually use these tools to solve real problems.

    I’m of the opinion that new programming paradigms and programming languages will be needed before we can really make effective use of systems with dozens of cores.

  2. Johann Dréo | April 28, 2008 at 5:49 am

    The advantage of having multiple processors lies not only in parallel codes, but also in codes running in parallel.

    In real-life applications of OR, problems often have probabilistic aspects. You may face uncertainty, you may accept a sub-optimal solution if it is produced in a very short time, you may want a robust algorithm whose performance does not depend on the instance of your problem, etc.

    Thus, you may need to do some experimental research with many repeated runs: you may need a lot of processing power for independent tasks.

    Here, the more processors you have, the happier you are.
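    Dréo’s point — many independent runs rather than one parallel code — is the “embarrassingly parallel” case, and it needs none of the hard tools Borchers mentions. A minimal sketch in Python (the toy heuristic, seed counts, and function names are made-up illustrations, not anything from the discussion above):

    ```python
    # Sketch: repeated independent runs of a randomized heuristic,
    # one worker per core, via the standard multiprocessing module.
    import random
    from multiprocessing import Pool

    def one_run(seed):
        """One independent run of a toy random-restart search.

        Each run gets its own seed, so runs are reproducible and
        fully independent -- no shared state, no locks, no threads.
        """
        rng = random.Random(seed)
        # Stand-in for a real heuristic: best of 10,000 random samples.
        return min(rng.uniform(0, 100) for _ in range(10_000))

    if __name__ == "__main__":
        with Pool() as pool:  # Pool() defaults to one worker per core
            results = pool.map(one_run, range(32))  # 32 independent runs
        print("best of", len(results), "runs:", min(results))
    ```

    Because each run touches no shared state, adding cores just means mapping over more seeds: exactly the “the more processors you have, the happier you are” situation.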