Operations Research Key to MBAs

Stacy Blackman, in the blog “Back to B-School”, has a short summary of the ideas of Matthew Stewart, author of The Management Myth, a book highly critical of the world of MBA education. Since I primarily teach operations research in the Tepper MBA program, I was heartened by Stewart’s views:

While Stewart believes that highly specialized studies in areas such as process-oriented operations research can be useful training for managers, it’s the case-study-oriented, generalist programs such as Harvard Business School that are less useful. Stewart says this is a problem of content:

In order to produce generalist courses, business school professors have been forced to invent subjects called strategy, called organizational behavior, and so on. They’re pretty much pseudo-sciences, and when you use them as a basis for instruction, you’re really teaching people how to master arcane jargon that has minimal connection to the real world, as opposed to teaching them to really think.


Stewart would like to see MBA programs focus not only on business but on broader subjects useful for developing knowledge and critical thinking, such as political theory or evolutionary biology. At the same time, he believes greater specialization is key. “Forget all this nonsense about general case studies and teach how logistics operations work in a complicated supply organization. Give them a real specialization as opposed to a phony one,” says Stewart.

I wouldn’t go as far as Stewart, at least as projected in these quotes, since I do believe there are useful insights from strategy and organizational behavior, primarily through the more formal, less “war story” teaching that you get at some business schools (including the Tepper School).

Maybe we can see a resurgence of operations research in business schools!

Scheduling the US Open

The New York Times has a nice article on what goes into scheduling the (tennis) US Open. You would think that most of the scheduling is done once the brackets are determined, but that is not the case. While the brackets determine who plays on each day, the assignment of matches to courts and to times of the day is done live, and depends on the outcomes of the matches. Venus Williams wins, and her match goes into a big court at a time best for TV. Venus loses, and her conqueror may be exiled to an outer court at 11AM.

There are lots of things that go into the schedule:

The task is to balance the often conflicting desires of players (who submit match-time preferences before the tournament), coaches (who often have more than one pupil and prefer they play at different times), broadcasters (including three in the United States and a litany of others around the world, each hoping to boost ratings with well-timed slots for particular players) and ticket holders (some holding passes for daytime matches, others with tickets to the prime-time show, all wanting compelling tennis spread evenly throughout their stay).

This is, of course, a great opportunity for operations research: our models are really good at doing this sort of scheduling. The hard part is doing the balancing: what should the objective be?
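
To make that concrete, here is a minimal sketch of the core model: a weighted assignment of matches to (court, time) slots. Everything below is hypothetical (the match ids, slots, and preference weights are made up, since the tournament’s actual criteria aren’t public), and PuLP is just one convenient open-source modeler:

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

matches = ["W1", "W2", "M1", "M2"]                  # hypothetical match ids
slots = ["Ashe_day", "Ashe_night", "Court11_day", "Court11_night"]

# pref[m][s]: desirability of slot s for match m, folding player, coach,
# broadcaster, and ticket-holder preferences into one weight.  Choosing
# these weights is exactly the "balancing" question above.
pref = {m: {s: 1.0 for s in slots} for m in matches}
pref["W1"]["Ashe_night"] = 10.0                     # say TV wants W1 in prime time

prob = LpProblem("us_open_day", LpMaximize)
x = LpVariable.dicts("x", [(m, s) for m in matches for s in slots], cat=LpBinary)

prob += lpSum(pref[m][s] * x[m, s] for m in matches for s in slots)
for m in matches:                                   # every match gets one slot
    prob += lpSum(x[m, s] for s in slots) == 1
for s in slots:                                     # at most one match per slot
    prob += lpSum(x[m, s] for m in matches) <= 1

prob.solve()
print([(m, s) for m in matches for s in slots if x[m, s].value() == 1])

Rest rules, the no-doubles-before-singles requirement, and the “pairs” condition described below would all enter as additional constraints; the hard balancing question lives inside pref.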

It appears that the system they have in place primarily checks constraints as people hand-schedule:

To demonstrate, Curley and Crossland moved Friday’s matches around, cutting and pasting from one court to another, from one time to another. A matchup, outlined in blue for men and pink for women, would turn red if the computer recognized a problem.

In one case, a player had a doubles match before her singles match — a no-no. In others, the computer flagged too little rest for players. Several noted that coaches had players playing simultaneously.

Clicking on a command called “pairs” showed the two matches whose winners would meet in the next round. Ideally, the matches take place at the same time, giving each winning player the same amount of rest leading into the next match.

This is not the sort of interface or description you would expect if the system were optimizing the schedule.

I have seen a very nice paper on tennis umpire scheduling, which talks about scheduling the umpires for the US Open, but there the constraints and requirements are a lot clearer. It would be quite challenging to put together a system for scheduling the matches that would allow for the sort of tradeoffs the hand-scheduling provides. But I would love to try to do so!

Operations Research is Taking Over the World

I often have postings noting that so-and-so from the world of operations research has become a dean, a provost, or even a university president. Somewhat more rarely, I can talk about an operations researcher as a baseball player. Operations Research can lead to lots of jobs.

It can even lead to becoming Prime Minister of the country with the world’s second-largest economy. Yukio Hatoyama, Prime Minister-designate of Japan, is not just trained in operations research but has a Ph.D. from Stanford in the subject. A search on Google Scholar suggests that his work was in the area of stochastic models in reliability, though it appears that his academic career was cut short (the Google Scholar counts suggest fairly … limited interest in his academic work). From an article in the Telegraph (UK):

Yet although Hatoyama refers jokingly to high office as “the family business”, he originally sought to avoid political life. After graduating from Tokyo University he completed an engineering PhD at Stanford University, and was set for an academic career in the US. However, his time in America coincided with the nation’s bicentenary, in 1976, and the presidential election that saw Gerald Ford replaced by Jimmy Carter. For the impressionable 29 year-old, it was an astonishing experience: such changes of government simply didn’t happen in Japan, where, despite political and monetary scandals, his grandfather’s party had remained in power since 1955.

The AP article has a nice one-line definition of operations research:

Hatoyama knows the U.S. well, having obtained three postgraduate degrees from Stanford, including a doctorate in operations research, which consists of using mathematical and scientific models to make decisions within organizations.

Congratulations, Dr. Hatoyama! Now show the world how operations research training can be translated into political and economic leadership.

Homeland Security at the OR Forum

The OR Forum area of Operations Research has a new paper and commentaries published on the topic “Homeland Security: From Mathematical Models to Policy Implementation”.  The paper is by Larry Wein, with commentaries by Dick Larson, Eva Lee, Nathaniel Hupert (a doctor in public health), and Dave Alderson.  I think it is a very interesting article with very nice commentaries.  I particularly liked Dr. Hupert encouraging more of us in operations research to look at these problems:

What is needed is a new generation of engineers who can speak the language of health care (and vice versa), and who then step into the unknown in much the way Prof. Wein describes to discover the important unsolved (or avoided, as the case may be) problems.

I also enjoyed Dave Alderson’s questioning of how to make this work. How do we get more people to have an effect on policy? He suspects it is not quite as easy as Larry makes it look.

Eva Lee talked about her work with the Centers for Disease Control on better systems for supplying emergency medical supplies, which I think is an extremely important and interesting issue. Dick Larson harkened back to World War II and suggested we need a bit more social science to truly understand how people react in a crisis.

I am the Area Editor for the OR Forum, of course, and this is about the eighth OR Forum paper published. I like all of the papers, but I thought the commentaries on this one were particularly interesting.

If you have a thought on the paper or commentaries, may I encourage you to provide it at the Operations Research site? Hundreds (or thousands) of people download the paper and the commentaries, but we still don’t get much discussion there.

Good Day for the Tepper School

I am not at the International Symposium on Mathematical Programming, but fearless blogger Tallys Yunes is, and he reported on the prizes given out yesterday (Tallys is also an example of the excellent students who come out of the Tepper School:  a theme for this post).  The Tepper School (Carnegie Mellon’s business school) did well, with Gérard Cornuéjols (a faculty member at Tepper) winning the Dantzig Prize and Mohit Singh (a recent graduate of our Algorithms, Combinatorics and Optimization Ph.D. program) winning the Tucker Prize.

The Dantzig Prize is given to “one or more individuals for original research which by its originality, breadth and depth, is having a major impact on the field of mathematical programming.”  The amazing thing about Gérard is that there are a number of distinct lines of research for which he could have been awarded this.  In particular, he has done tremendous work both in integer programming and in characterizing and identifying various forms of matrices (ideal, perfect, balanced, and so on).  He has also found time to put together a book on Optimization Methods in Finance based on his very successful Masters-level course here.

The Tucker Prize is the doctoral dissertation prize of the Mathematical Programming Society.  Mohit, now at Microsoft Research, did his dissertation with Tepper faculty member R. Ravi.  Mohit has a number of nice papers on approximations for network design problems (and other things, including stochastic versions of hard combinatorial problems), using new techniques he developed.  I am not normally a huge fan of approximation algorithms (“So our profits are within a factor of 3 of what is possible?  Umm….”), but I really liked Mohit’s dissertation.  The iterative methods he developed are very powerful and might even have some practical use!  We have had an extremely impressive group of students come out of our ACO program, particularly since we only admit a handful of students every year across the three groups.

There are not too many universities left where serious mathematical programming is done in the business school:  I am proud to be part of one that is doing work at the very highest level.  The only downside is that once in a while my dean complains that I should publish more:  “Look at your colleagues Balas, Cornuejols, Ravi….”  And I guess he is right, though that is a pretty tough standard to meet.

Added 11:20AM August 24. Turns out I hit the topics for Gérard’s prize correctly.  Here is the citation:

“You have been selected as the winner of the 2009 Dantzig Prize, in recognition for your deep and wide-ranging contributions to mathematical programming, including your work on Balanced and Ideal Matrices and Perfect Graphs and your leading role in the work on general cutting planes for mixed integer programming over many years covering both theory and computation.”

Not at ISMP

The International Symposium on Mathematical Programming of the Mathematical Programming Society occurs every three years, and I generally like to attend.  ISMPs are like INFORMS conferences in size, but have many more talks (and people!) that I want to see.  This year’s ISMP is being held next week in Chicago.  Unfortunately, I won’t be there:  I will be in Cork, Ireland, attending an advisory board meeting of the Cork Constraint Computation Center.

If anyone is blogging from ISMP, please let me know so I can be sure to point to your entries.

Note added August 26. Check out the comments for pointers to Nathan Brixius and Tallys Yunes, who are blogging the conference.

Social Engineering for the Overeducated

I got an interesting email today.  Ostensibly from a Prof. Jochem Koos (a good Dutch name) at Elsevier, the email lays out editorial and review policies for its journals.  After stating that referees must be of the highest quality, the letter asks potential editors and reviewers to fill out a form to certify their credentials and be added to their list.  The bad part is that it costs $100 to be added to the list.  The good part is that referees are to be paid $30 per page reviewed.  $30!  I just finished a 50-page monster for an Elsevier journal.  I could get $1500?!  Wow!  Does that go for revisions too?  I could keep that paper in an endless series of revisions at $1500 per round.  “There is a comma misplaced on page 26.  I will need to see a revision before I recommend publication.”  Ka-ching!  And if I am running a bit short on cash, a few reviews of the form “Interesting paper, but I would like to see it … 13 pages longer” should be just the thing for making the mortgage payment.

Of course, it is all a ruse.  Elsevier does not pay $30/page for refereeing (at least, not in operations research) and you don’t have to pay $100 to get on an approved list to referee.

It is surprising that a group that overwhelmingly has PhDs can be a target for such a scam.  On the other hand, considering some of the people I have met in my professional life, perhaps it is not that surprising.

Bottom line:  don’t send off $100 to just anyone with a Gmail address.  Elsevier has some more information on these sorts of emails.

Competition then Cooperation: More on the Netflix Challenge

Wired has a nice article on the teams competing to win the Netflix Prize (thanks for the pointer, Matt!).  I think the most interesting aspect is how the “competition” turned into cooperation:

Teams Bellkor (AT&T Research), Big Chaos and Pragmatic Theory combined to form Bellkor’s Pragmatic Chaos, the first team to qualify for the prize on June 26 with a 10.05 percent improvement over Netflix’s existing algorithm. This triggered a 30-day window in which other teams were allowed to try to catch up.

As if drawn together by unseen forces, over 30 competitors — including heavy hitters Grand Prize Team, Opera Solutions and Vandelay Industries, as well as competitors lower on the totem pole — banded together to form a new team called, fittingly, The Ensemble.

In fact, with a bit more time, all those groups might have come together:

As much as these teams collapsed into each other during the contest’s closing stages, they might have mated yet again to ensure that everyone on both qualifying teams would see some of the $1 million prize. Greg McAlpin of The Ensemble told Wired.com his team approached Bellkor’s Pragmatic Chaos and asked to join forces (he later clarified that he was still part of Vandelay Industries at this point), but was spooked by AT&T’s lawyers. “We invited their whole team to join us. The first person to suggest it was my 11-year-old son,” said McAlpin. “I thought it sounded like a really good idea, so I e-mailed the guys from Bellkor’s Pragmatic Chaos… [but] they said AT&T’s lawyers would require contracts with everyone to make agreements about who owns intellectual property… and it would take too long.”

Data mining methods can easily be combined.  A simple approach is to have the algorithms vote on the outcome.  This can often produce much better answers than any individual technique.  The Ensemble clearly did something like that:

To combine competitors’ algorithms, The Ensemble didn’t have to cut and paste much code together. Instead, they simply ran hundreds of algorithms from their 30-plus members (updated) and combined their results into a single set, using a variation of weighted averaging that favored the more accurate algorithms.
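
Here is a minimal sketch of that kind of blending, with hypothetical data.  Weighting each algorithm by the inverse of its error on a holdout set is one plausible reading of “weighted averaging that favored the more accurate algorithms”; it is not necessarily exactly what The Ensemble did:

import numpy as np

def blend(predictions, holdout_truth):
    # Weight each algorithm by the inverse of its RMSE on a holdout set,
    # so the more accurate algorithms get more say in the final prediction.
    rmses = [np.sqrt(np.mean((p - holdout_truth) ** 2)) for p in predictions]
    weights = np.array([1.0 / r for r in rmses])
    weights /= weights.sum()              # normalize the weights to sum to 1
    return sum(w * p for w, p in zip(weights, predictions))

# Hypothetical example: three algorithms predicting ratings for five movies.
rng = np.random.default_rng(0)
truth = np.array([3.0, 4.0, 2.0, 5.0, 3.5])
preds = [truth + rng.normal(0.0, s, size=5) for s in (0.3, 0.6, 1.2)]
print(blend(preds, truth))                # typically beats any single algorithm

In practice the weights would be fit on a probe set kept separate from the data used for the final score.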

This is a great example of the intellectual advances that result from long-term “competitions”.  After a while, it is no longer competitors against competitors but rather all the competitors against the problem.  I don’t know if the Netflix Challenge was specifically designed to result in this, but it has turned out wonderfully.  It is much less likely that the groups would have gotten together if the contest had simply been “Who has the best algorithm on August 1, 2009?”  The mix of initial competition (to weed out the bad ideas) and final cooperation (to get the final improvements) was extremely powerful here.

The winner announcement is expected in September.

Mittelmann’s Benchmarks: CPLEX versus Gurobi

Hans Mittelmann has some new benchmarks comparing CPLEX 12.1 with GUROBI 1.1.2 on various mixed integer linear programming instances (I last wrote on these benchmarks in January, with earlier versions of both codes:  be sure to check out the comments from that post, since many of them apply here as well).  He covers both feasibility and optimization problems.

Checking the computation times for one set of optimization instances:

  problem     CPLEX1    GUROBI1     CPLEX4    GUROBI4
------------------------------------------------------
bc1            101         94         81         51
bienst2        165         58         87         27
dano3_5        148        618        276        362
mark._4_0      225         45         86         36
mark._5_0        f       6909          f       4368
mkc1             5        113          5
qap10           51         76         48         73
seymour1       178        285        116        161
swath2          13         10          5          6
swath3          26        563         49         53
30_05_100        7         11          4         11

The CPLEX1 and GUROBI1 columns use one processor; the …4 versions use four.  An “f” means optimality was not proved within 7200 seconds (all times are in seconds).  The above table is just an excerpt, not a representative sample, so be sure to check out Hans’ page.

A couple of conclusions that I come up with (your mileage may vary):

  • Gurobi is a very impressive code:  which of CPLEX or Gurobi will be faster on a particular instance is essentially unpredictable, quite a feat for such a new code.
  • Parallel speedup is quite unpredictable.  Often there is no, or minimal, speedup;  sometimes there is huge speedup;  sometimes there is a slowdown.  None of this is surprising:  the initial linear relaxation is hard to parallelize, and the nondeterministic nature of tree search can lead to getting “lucky” or “unlucky” in the parallel search.  The quick computation below makes this concrete.
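
Here is that computation, a short Python sketch with the times copied from the table above (speedup = one-thread time divided by four-thread time):

times = {  # instance: (CPLEX1, CPLEX4, GUROBI1, GUROBI4), from the table above
    "bc1":     (101, 81, 94, 51),
    "bienst2": (165, 87, 58, 27),
    "dano3_5": (148, 276, 618, 362),
    "swath3":  (26, 49, 563, 53),
}
for name, (c1, c4, g1, g4) in times.items():
    print(f"{name:8s}  CPLEX {c1 / c4:5.2f}x   GUROBI {g1 / g4:5.2f}x")
# dano3_5 is a slowdown for CPLEX (about 0.54x) but a 1.7x speedup for
# GUROBI; swath3 speeds up more than 10x for GUROBI while slowing down
# for CPLEX.  Exactly the unpredictability noted above.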

If we pull a few lines from the feasibility benchmarks:

============================================================
problem        CPLEX    FP-ab     FP-bfl    SCIP     GUROBI
------------------------------------------------------------
atlanta-ip      848        23        21       391       189
core4872-1529     1       413        52       374         1
ds                1         -      5350        38         3
germanrr        183        10         6       200        19
momentum1         1        23        17      6257         2
momentum2      6181        47        71      1504      4118
momentum3         1       350       627         -         4
msc98-ip        185        19        25      1754        33
neos16           47       236       241        43        72
neos-506428    1338      2969      4531      1739      2694

(FP are Feasibility Pump approaches; SCIP is a fast non-commercial code.)  Here the results are all over the map:  practically anything can win, anything can lose, and there can be three orders of magnitude difference between running times.

Now, speed is not the only thing to look for in an optimization code (support, interfaces, and price are a few others), but these benchmarks are very useful in trying to compare codes.  Thanks, Hans!