The ACM (the Association for Computing Machinery) has announced 44 new Fellows. A number of them are well-known in the operations research community (some are just plain well-known: can it be that Stephen Cook of NP-completeness fame was not a Fellow before now?). These include:

- Tuomas Sandholm, Carnegie Mellon. Tuomas does an amazing number of things very well. He is an ideal faculty member: raising money, training students, and so on. He also runs a company: CombineNet. Much, but not all, of his work is in combinatorial auctions, and he has really revolutionized that area. He also windsurfs. He is someone I am jealous of.
- Michel Goemans, MIT. Has done amazing work in approximation algorithms. I am not a huge fan of approximation algorithms: I like solving real problems, and it is not clear that a factor of 2 approximation is of much use, much less a log x/log log x approximation. And while many approximation papers give lip service to the applications of the models they work with, very rarely are approximation algorithms implemented. That said, Michel does what I admire more than anything: he finds truly new and creative approaches to problems. When Michel finds a new approach (which he seems to do every two years or so), it is worth learning about, because his approaches generally lead to a very rich and productive research vein. I am jealous of Michel too.
- Sanjeev Arora, Princeton. Unlike Tuomas and Michel, I do not know Sanjeev personally, so I cannot be jealous of him. But I certainly know his work on randomized approaches to hard problems.
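For concreteness, the canonical "factor of 2" result alluded to above is vertex cover via maximal matching: take both endpoints of any uncovered edge, and since any cover must include at least one endpoint of each chosen edge, the result is at most twice optimal. A minimal sketch (the graph below is purely illustrative):

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for vertex cover:
    greedily build a maximal matching and take both endpoints
    of every matched edge."""
    cover = set()
    for u, v in edges:
        # If neither endpoint is in the cover yet, this edge is
        # uncovered: add both endpoints (it joins the matching).
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Star graph: optimum cover is {0}, the heuristic returns 2 vertices.
print(vertex_cover_2approx([(0, 1), (0, 2), (0, 3)]))
```

The guarantee is tight in the worst case, which is exactly the practical complaint: a solution twice the size of the optimum is rarely what a practitioner wants to report.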

So congrats to all the new Fellows!

First of all, congrats to all three.

I certainly share your jealousy. Michel in particular has thrilled me with his unconventional approaches. I remember Michel’s paper on Minimum Bounded-Degree Spanning Trees. Awesome!

I agree with your dislike of approximation algorithms in the sense that a 2-approximation is not necessarily what you would want to see when solving real-world problems. Not to mention the sometimes bogus motivations. Nonetheless, I have often found that the (general) ideas or patterns that lead to the approximation result in the first place can be used to derive heuristics for the problem. The easiest examples are probably grids that scale with an approximation factor, data rounding, or state-space compression.
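The data-rounding pattern mentioned above can be sketched with 0/1 knapsack, where the classic FPTAS rounds profits down to a coarser grid so an exact dynamic program over profit values stays small; used loosely, the same trick is simply a heuristic. A minimal sketch (the instance and `eps` are illustrative):

```python
def knapsack_rounded(profits, weights, capacity, eps=0.5):
    """Heuristic 0/1 knapsack via profit rounding (the FPTAS idea):
    scale profits down by K = eps * max(profits) / n, solve the
    rounded instance exactly with a DP over profit values, and
    return a guaranteed value for the selected set."""
    n = len(profits)
    K = eps * max(profits) / n            # rounding granularity
    rp = [int(p // K) for p in profits]   # rounded-down profits
    total = sum(rp)
    INF = float("inf")
    # dp[q] = minimum weight needed to reach rounded profit q
    dp = [0] + [INF] * total
    for i in range(n):
        for q in range(total, rp[i] - 1, -1):
            if dp[q - rp[i]] + weights[i] < dp[q]:
                dp[q] = dp[q - rp[i]] + weights[i]
    best = max(q for q in range(total + 1) if dp[q] <= capacity)
    # Each true profit is at least its rounded value times K, so
    # best * K is a lower bound on the chosen set's real profit.
    return best * K

print(knapsack_rounded([10, 20, 30], [1, 2, 3], capacity=3, eps=0.1))
```

Shrinking `eps` tightens the grid (and the guarantee) at the cost of a larger DP table, which is exactly the approximation-factor dial repurposed as a heuristic tuning knob.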

On the other hand, from a theoretical point of view, some results are *really* nice, especially complexity-theoretically: inapproximability results reveal a lot of information about a problem’s inherent structure.