
Bugs and Modeling

The web was all abuzz on December 31 as the 30GB Microsoft Zune players all stopped working. What was up? Was it a terrorist attack? Solar flares? A weird Y2K bug almost a decade later?

The truth is a bit prosaic: there was simply a bug related to leap years. Since the Zune did not exist during the previous leap year, 2008 was the first chance for the bug to show itself. There are descriptions of the bug in numerous places: here is one if you haven’t seen it yet. Bottom line: a section of code for converting “days since January 1, 1980” (when the universe was created) to years, months, and days didn’t correctly handle a leap year, leading to an infinite loop.
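The widely circulated driver source shows the shape of the failure. Here is a Python sketch of that loop; the shipped code was C, and the function names, the iteration cap, and the fixed variant are my reconstructions for illustration, not the firmware itself. December 31, 2008 was day 10,593 counted from January 1, 1980, which reduces to day 366 of the leap year 2008: the one value neither branch handles.

```python
ORIGIN_YEAR = 1980  # the Zune counts days starting from January 1, 1980

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_to_year_buggy(days, max_iterations=100_000):
    """Mirrors the structure of the faulty conversion: when `days` is
    exactly 366 in a leap year, neither branch changes anything, so the
    loop spins forever.  The iteration cap stands in for the frozen Zune."""
    year = ORIGIN_YEAR
    while days > 365:
        if is_leap_year(year):
            if days > 366:
                days -= 366
                year += 1
            # days == 366 falls through untouched: the infinite loop
        else:
            days -= 365
            year += 1
        max_iterations -= 1
        if max_iterations == 0:
            raise RuntimeError("hung, just like the Zune on day 10593")
    return year, days

def days_to_year_fixed(days):
    """One extra branch fixes it: day 366 of a leap year is a valid answer."""
    year = ORIGIN_YEAR
    while days > 365:
        if is_leap_year(year):
            if days > 366:
                days -= 366
                year += 1
            else:
                break  # days == 366: December 31 of a leap year
        else:
            days -= 365
            year += 1
    return year, days
```

`days_to_year_fixed(10593)` returns `(2008, 366)`, i.e. December 31, 2008; the buggy version reduces to exactly that state and never leaves it.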

It is easy to laugh at such a mistake: why didn’t a code review or unit testing catch it? But such “simple” parts of the code are precisely the ones most likely to go untested. When you have to test all sorts of complicated things like checking authorization, playing music, and handling the interface, who expects problems in a date calculation? And hence a zillion Zunes fail for a day.

Ooops!


I experienced something similar when I was reviewing some code I use to create a sports schedule. Never mind the sport: it doesn’t matter. But the model I created aimed to have a large number of a particular type of game in a particular week. And, for the last few years, we didn’t get many of those games in that week. This didn’t particularly worry me: there are a lot of constraints that interact in complicated ways, so I assumed the optimization was right in claiming that the number we got was the best possible (and this was one of the less important parts of the objective). But when I looked at the code recently, I realized there was an “off by one” error in my logic, and sure enough the previous week had a full slate of the preferred games. Right optimization, wrong week. Dang!
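The post doesn’t show the model, so here is a purely hypothetical sketch of how this kind of off-by-one typically arises: the schedule data labels weeks 1..N, but the code indexes a 0-based array. The function names, week numbering, and data layout are all invented for illustration; which direction the error shifts the reward is arbitrary.

```python
N_WEEKS = 16      # hypothetical season length
TARGET_WEEK = 10  # the week meant to carry a full slate of preferred games

def preferred_game_count(games_per_week, target_week=TARGET_WEEK):
    # BUG: games_per_week is a 0-indexed list, so games_per_week[0] holds
    # week 1.  Indexing with the 1-based week label silently reads a
    # neighboring week -- the objective rewards the wrong slot.
    return games_per_week[target_week]

def preferred_game_count_fixed(games_per_week, target_week=TARGET_WEEK):
    # Week w lives at index w - 1.
    return games_per_week[target_week - 1]
```

With a full slate in week 10 (index 9), the buggy objective reads a neighboring week’s count, so the optimizer dutifully loads the wrong week: right optimization, wrong week.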

So one of my goals this week, before classes start, is to look at the code again with fresh eyes and see what other errors I can find. There are some things I can do to help find such bugs, like trying the model on small instances and turning various constraint types on and off, but one difficult aspect of optimization is that knowing the optimal solution requires … optimization, making it very hard to find these sorts of bugs.

{ 6 } Comments

  1. Marina | January 5, 2009 at 5:01 pm | Permalink

    While “a zillion Zunes” has a zany zing to it, there are, apparently, only about three million of them out there.

    …and in the meantime, the Mac fanatics concerned about the 2008 leap second have been feasting on articles like this one

  2. Patrick Viry | January 6, 2009 at 11:48 am | Permalink

    Software developers use tools precisely to automate such tests (e.g. JUnit), and no software is allowed out of the lab before all test suites show green. Writing test suites before writing the first line of code is considered good practice.

    As soon as a model becomes larger than a textbook example, I cannot stress enough that optimization modelers should use the same tools and testing procedures that software developers have commonly used for decades. This won’t ensure that a solution is optimal, but at least that it is sensible.
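This suggestion can be made concrete with Python’s unittest, the analogue of the JUnit mentioned above; the schedule format and the specific checks below are invented for illustration. The idea is to validate the solver’s output with independent code that never touches the optimizer, on tiny instances whose answers can be checked by hand.

```python
import unittest

def check_schedule(schedule, n_weeks, teams):
    """Independent feasibility checks on a solver's answer.
    `schedule` maps week number -> list of (home, away) pairs;
    the format is hypothetical, standing in for whatever the model emits."""
    assert set(schedule) <= set(range(1, n_weeks + 1)), "week label out of range"
    for week, games in schedule.items():
        playing = [team for game in games for team in game]
        assert all(team in teams for team in playing), "unknown team"
        assert len(playing) == len(set(playing)), f"a team plays twice in week {week}"

class ScheduleTest(unittest.TestCase):
    """A unittest suite in the JUnit spirit: run it before trusting
    any large solve, and add regression cases as failures turn up."""
    def test_tiny_instance(self):
        schedule = {1: [("A", "B")], 2: [("A", "C")]}
        check_schedule(schedule, n_weeks=2, teams={"A", "B", "C"})

    def test_catches_double_booking(self):
        bad = {1: [("A", "B"), ("A", "C")]}
        with self.assertRaises(AssertionError):
            check_schedule(bad, n_weeks=2, teams={"A", "B", "C"})
```

None of this proves optimality, but it does catch exactly the class of error in the original posting: a solution that is well-formed according to the solver yet wrong according to an independent reading of the requirements.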

    Also, well-structured models help a lot in pinpointing mistakes and typos. I have seen too many “spaghetti code” models where any code review or debugging has become hopeless. Clearly identify and document parameters, write assertions or place debug points at appropriate spots, and modularize by separating data modeling from optimization modeling: all of this has an important impact on the confidence you’ll have in the code and the time you’ll spend chasing bugs. A good IDE is also pretty useful in this respect.

  3. Matthew Saltzman | January 6, 2009 at 5:40 pm | Permalink

    Validating mathematical models is different in many ways from validating programs. Program unit tests typically provide a selection of inputs for which the output is known. Ideally, all possible paths through the code are exercised. As failures (Dijkstra warns against calling them “bugs”) are detected, regression tests can be added to the unit test framework for a particular unit.

    For mathematical models, it is frequently the case that only trivial instances can be validated in testing. As the problem size is scaled up, validation can become exponentially more difficult. Limited-precision arithmetic introduces anomalies that can be impossible to detect on small, stable examples.

    Spreadsheet models are notoriously difficult to validate. LINDO/CPLEX LP format and MPS format are also difficult to validate, although models of any size are produced by matrix generators, which at least offer the possibility of validation as programs. Algebraic modeling languages and IDEs (OPL, MPL, MOSEL, AMPL, GAMS, Concert, FlopC++, etc.) are much better, but even there, there are difficulties. For one thing, I don’t know of any algebraic modeling system for optimization that includes unit analysis.

  4. Patrick Viry | January 6, 2009 at 6:16 pm | Permalink

    Matthew, you’re totally right about the LP and MPS formats: they’re almost impossible to validate.

    > Algebraic modeling languages and IDEs (OPL, MPL,
    > MOSEL, AMPL, GAMS, Concert, FlopC++, etc.) are
    > much better, but even there, there are difficulties. For
    > one thing, I don’t know of any algebraic modeling
    > system for optimization that includes unit analysis.

    So allow me to speak for my own shop: you forgot to mention OptimJ, which is integrated within Eclipse and directly compatible with JUnit. This won’t guarantee that the model is ideal, but it helps ensure that solutions are sensible and can easily catch the most common errors, such as the “off by one” mentioned in the original posting.

  5. Matthew Saltzman | January 7, 2009 at 11:00 am | Permalink

    Patrick-

    Interesting…I will have to look into it more.

    I see that the phrase “unit analysis” (as opposed to “unit testing”) is ambiguous in this context, though. I had in mind the analysis of the units of measure on variables and coefficients, to ensure that one adds and compares like quantities. Thus, for example,

    (resource units)/(product unit) × (product units) = (resource units)

    and

    $/(product unit) × (product units) = $

    Having the modeling environment tag quantities with units and automatically detect anomalies would be an enormous aid to modelers–especially students, for whom this seems to be a major source of difficulty when they are learning to build models.
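As a toy illustration of the idea (nothing here is any real modeling system’s API), a few lines of Python can tag values with unit exponents so that multiplication cancels units while addition of unlike quantities fails loudly:

```python
class Quantity:
    """A value tagged with unit exponents, e.g. {"$": 1, "product": -1}
    for $/(product unit).  A sketch, not a real modeling system's API."""
    def __init__(self, value, units=None):
        self.value = value
        self.units = dict(units or {})  # unit name -> integer exponent

    def __mul__(self, other):
        units = dict(self.units)
        for name, exp in other.units.items():
            units[name] = units.get(name, 0) + exp
            if units[name] == 0:
                del units[name]  # cancelled, as in $/(product unit) * (product units)
        return Quantity(self.value * other.value, units)

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Quantity(self.value + other.value, self.units)

# The second example above, in code:
unit_cost = Quantity(3.0, {"$": 1, "product": -1})  # $/(product unit)
batch = Quantity(100, {"product": 1})               # product units
total_cost = unit_cost * batch                      # plain $, product cancels
```

Here `unit_cost * batch` yields 300.0 tagged `{"$": 1}`, and trying to add `total_cost` to `batch` raises immediately: exactly the kind of automatic anomaly detection described above.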

  6. Hannu Rummukainen | January 10, 2009 at 10:07 am | Permalink

    I can’t comment on how well it works in practice, but I know that you can define units of measurement and perform unit analysis in the AIMMS modelling environment.