Warranties and Inventory

Jay Swaminathan from the University of North Carolina was visiting us today. He gave an interesting talk about how to set inventory levels when warranty replacement is a significant issue. This paper really struck a chord, since I am working on my 5th(!) iPod. Without fail, my iPod dies after 3 or 4 months, requiring a return shipment and then a new iPod. The only good aspect of this is that the warranty is reset, so an iPod originally bought in July 2004 is now under warranty until November 2006. Who needs extended warranties!

In any case, the paper (Jay together with Wei Huang and Vidhyadhar Kulkarni) made a couple of good points. First, for fairly realistic data (they are working with a real, unnamed company), ignoring warranty needs when setting inventory can lead to pretty high stock-out charges. The second point, less obvious but perhaps more important, involved some new technology the company was planning to invest in: a system that would report detailed information on when each item was sold (and which item), rather than the aggregate sales values the company was getting. This would give the firm an accurate distribution of the actual ages of the items in the field, rather than just the total numbers. It turns out that there was very little value in the more detailed distribution: aggregate information worked almost as well. A good example of the value of analysis before making significant investments.
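A toy simulation makes the first point concrete. All numbers here are made up for illustration (this is not the paper's model or data): if warranty replacements are a real chunk of demand but stock is set from new sales alone, stock-outs pile up.

```python
# Toy illustration (made-up numbers, not the paper's model): stock set
# from sales alone vs. stock set from sales plus warranty replacements.
import random

random.seed(0)

SALES_MEAN = 100    # mean monthly unit sales (assumed)
WARRANTY_MEAN = 60  # mean monthly warranty replacements (assumed)
MONTHS = 10_000

def monthly_demand():
    """Total demand: new sales plus warranty replacements.
    Normal approximation to Poisson, just to keep this self-contained."""
    sales = max(0, round(random.gauss(SALES_MEAN, SALES_MEAN ** 0.5)))
    returns = max(0, round(random.gauss(WARRANTY_MEAN, WARRANTY_MEAN ** 0.5)))
    return sales + returns

def stockout_rate(stock):
    """Fraction of months in which demand exceeds the stock level."""
    return sum(monthly_demand() > stock for _ in range(MONTHS)) / MONTHS

# Stock from sales alone (mean + 2 sigma) vs. from total expected demand.
sales_only_stock = SALES_MEAN + 2 * round(SALES_MEAN ** 0.5)
total_stock = sales_only_stock + WARRANTY_MEAN + 2 * round(WARRANTY_MEAN ** 0.5)

print(f"ignoring warranty demand:  {stockout_rate(sales_only_stock):.1%} stock-out months")
print(f"including warranty demand: {stockout_rate(total_stock):.1%} stock-out months")
```

With warranty replacements at 60% of sales volume, the sales-only stock level runs out almost every month, while the warranty-aware level almost never does.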

Robust Optimization

Dimitris Bertsimas gave a talk here today on robust optimization. One question he asked was (paraphrasing) “What do you do when reality refuses to match up to the model?”, which I think is a great question. So much of what we do seems to be fragile (think of the cascading effects of a snowstorm in Chicago stranding travelers in Miami), even though we know our models are based on data that is only an approximation to reality. Robust optimization (roughly, optimizing under the assumption that no more than a certain number of data points are wrong, and each is wrong by no more than a fixed amount) is one way to attack this. Stochastic optimization is another. I am not sure we have found the right method yet (though Dimitris’ work is extremely impressive!).
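To make the parenthetical definition concrete, here is a brute-force toy in the spirit of that idea (my own contrived numbers, not Bertsimas' LP formulation): pick 3 of 5 items to maximize value, where an adversary may shrink at most Gamma of the chosen items' values, each by a known deviation.

```python
# Toy Gamma-robustness sketch: at most GAMMA item values may be wrong,
# each off by at most its deviation. Brute force over all selections.
from itertools import combinations

values = [10, 9, 9, 8, 8]  # nominal item values (made up)
devs   = [6, 5, 1, 1, 1]   # worst-case error in each value (made up)
K, GAMMA = 3, 2            # pick 3 items; at most 2 values are wrong

def worst_case(items):
    """Value after an adversary degrades the GAMMA items that hurt most."""
    hits = sorted((devs[i] for i in items), reverse=True)[:GAMMA]
    return sum(values[i] for i in items) - sum(hits)

nominal_best = max(combinations(range(len(values)), K),
                   key=lambda s: sum(values[i] for i in s))
robust_best = max(combinations(range(len(values)), K), key=worst_case)

print("nominal pick:", nominal_best, "worst case:", worst_case(nominal_best))
print("robust pick: ", robust_best, "worst case:", worst_case(robust_best))
```

The nominal optimum grabs the highest-value items, which are exactly the ones with the biggest deviations; the robust pick gives up a little nominal value in exchange for a much better guarantee when up to two data points turn out to be wrong.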