The American Scientist Magazine Understands Nothing about the Traveling Salesman Problem

I like to think there are some solid foundations to my life. I will be able to do the Monday New York Times crossword and I will not be able to do the Thursday version. My dental hygienist will not be satisfied with the amount of flossing I do. I will get my favorite spaghetti meal served on my birthday.

And I can trust American Scientist to write articles that teach me science in an accessible, yet challenging, way. American Scientist, the magazine of the honor society Sigma Xi, has been my favorite science magazine since the demise of The Sciences, my all-time favorite. Or it was.

That particular pillar of my existence crumbled into dust today as I was reading an otherwise interesting article on “Paradoxes, Contradictions, and the Limits of Science” by Noson Yanofsky. In this article, there is a sidebar on the Traveling Salesman Problem that makes me weep.

It is fortunate that they have put this behind their firewall so as to minimize the damage they have done. But let me briefly describe it: there is a map of the United States, along with a truly atrocious Traveling Salesman tour through the US, apparently visiting each zip code (approximately 43,000 zip codes). The tour is created by “Hilbert Curves” and is “about 75% optimal”. Then, the inevitable: “Finding the actual shortest route would take a computer essentially an infinite amount of time to compute”. Cue the sidebar: “But what about going to 100 different cities? A computer would have to check 100x99x98x….x2x1 possible routes.” They then helpfully expand that into the 157 digit number of potential routes. “The computer would have to see how long the route takes, and then compare all of them to find the shortest route.”

The sidebar continues with the drivel of computer speed and the number of centuries it would take.

No, no. 100! times no. It is not necessary to check all the routes. 100-city TSPs can be solved to proven optimality in a very short period of time. And even 43,000 zip codes can be solved to optimality with a non-infinite (and quite practical) amount of computation. Using integer programming and specialized techniques, Bill Cook and his gang, with their code Concorde, have solved to optimality problems with up to 85,900 cities. The “zip code” problem is just like the problems Cook has solved, just smaller.

The Traveling Salesman problem is NP-complete (or NP-hard, depending on what you mean by the TSP). But the “figure out how big 100! is and call that the running time” argument completely misunderstands the role optimization and algorithms play in truly solving hard problems (I mean it: the true optimum is found and proved to be optimal). My most recent blog post (sadly too long ago) was on this very subject. Perhaps I should rename this blog “Michael Trick’s Rants about Complete Enumeration Arguments”.
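
To see why complete enumeration is a red herring, it helps to have even a toy example in hand. The sketch below is mine, not anything from the article, and emphatically not how Concorde works (Concorde uses linear programming based branch-and-cut); it is just the textbook Held-Karp dynamic program. Even this decades-old method needs on the order of n^2 * 2^n steps rather than n!, and real solvers do enormously better than that in practice.

from itertools import combinations

def held_karp(dist):
    # Exact TSP by Held-Karp dynamic programming: O(n^2 * 2^n) work,
    # a far cry from the n! of checking every route (and still nowhere
    # near how branch-and-cut codes like Concorde actually behave).
    n = len(dist)
    # C[(S, j)]: length of the shortest path from city 0 through the set S, ending at j
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# Tiny made-up symmetric instance; the distances are illustrative only.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(held_karp(dist))  # 18, the length of the optimal tour 0-1-3-2-0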

Even if you want to call this instance impossible to solve, why would you ever use a Hilbert Curve heuristic for it? Don’t get me wrong: I am a student of John Bartholdi and his paper with Loren Platzman on spacefilling curves and his paper applying this to Meals-on-Wheels scheduling changed my life. But these techniques don’t give great tours (they have lots of other great properties).
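
For the curious, here is the flavor of a spacefilling-curve heuristic (my own illustrative sketch, not Platzman and Bartholdi's code, which used a Sierpinski curve rather than a Hilbert curve): compute where each city falls along a curve that fills the plane, and simply visit the cities in that order. It is lightning fast and trivially updatable, which is exactly why it worked so well for Meals-on-Wheels, but the tours it produces are nothing to brag about.

def hilbert_index(order, x, y):
    # Position of the integer point (x, y) along a Hilbert curve filling a
    # 2^order by 2^order grid (standard bit-twiddling construction).
    n = 1 << order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/reflect into standard position
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def spacefilling_tour(cities, order=16):
    # Scale the coordinates onto an integer grid and visit the cities
    # in order of their position along the Hilbert curve.
    xs, ys = [c[0] for c in cities], [c[1] for c in cities]
    side = (1 << order) - 1
    def scale(v, lo, hi):
        return 0 if hi == lo else int((v - lo) / (hi - lo) * side)
    keys = [(hilbert_index(order, scale(x, min(xs), max(xs)),
                           scale(y, min(ys), max(ys))), i)
            for i, (x, y) in enumerate(cities)]
    return [i for _, i in sorted(keys)]

# Hypothetical coordinates; a real run would use the 43,000 zip-code centroids.
print(spacefilling_tour([(0.0, 0.0), (10.0, 1.0), (9.5, 8.0), (1.0, 9.0), (5.0, 5.0)]))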

It appears American Scientist has taken this tour from this webpage. The published map is the bottom one on the page. The resulting tour is clearly not good. Practically any other modern heuristic would do better. This is no complaint about Robert Kosara, the author of the original article: the page describes what he wanted to do, and he did it well. But for American Scientist to put this forward as an example of the state-of-the-art heuristic approach completely misrepresents the state of the art for heuristic approaches. And I have no idea what 75% optimal means: the tour is longer than optimal, of course.

I don’t know if this means I should not trust American Scientist in areas that are not my specialty. On something I know a bit about, the magazine has failed miserably.

Quantum computing and operations research

Quantum computing is one of those weird ideas out there that will either change the world or go the way of cold fusion, so I periodically think I need to learn more about the area. I am by no means an expert (or even knowledgeable) in the area. My expertise comes from reading Scott Aaronson’s blog, which makes me as knowledgeable as about a million other people out there. Seeing a series of papers showing how quantum computing can factor numbers is interesting, until you realize that the numbers involved are still well in the range of what my eight year old can do in his head. There are a lot better ways to spend my research time.

So, when a paper comes out involving using quantum computers to solve NP-hard problems (a paper just presented at the ACM Conference on Computing Frontiers), I didn’t exactly leap at the opportunity to read it. That was until I saw the senior author: Cathy McGeoch (the other coauthor is doctoral student Cong Wang). Cathy is the real deal. I have met her a few times, but I don’t know her well personally. But her research has had a big effect on me. Cathy, together with David Johnson (of Johnson and Garey fame), began the DIMACS Challenges twenty years ago. The goal in these challenges is to look at real-life algorithmic performance. The two of them ran the first Challenge (on Network Flows and Matchings); I ran the second Challenge, on cliques, coloring, and satisfiability. Cathy was extremely influential in how I thought about the Challenge. In particular, how can one distinguish between “fast coding” and “fast algorithms”? And what do interesting computational questions look like? I certainly never got to her level of sophistication when it comes to experimental design and evaluation, but her work made me better.

My research has moved off in my dilettante way to other things; Cathy has continued to work very hard to put experimental algorithmics on a solid footing. She even has a new book out on the topic (which I just ordered).

All this means: if Cathy is saying something about a computational evaluation, it is going to be worth listening to.

Short synopsis: Cathy and her coauthor got access to a D-Wave Two platform, heuristically solved a number of instances of three NP-hard problems (Quadratic Unconstrained Binary Optimization, Weighted Maximum Two-Satisfiability, and Quadratic Assignment), and found that the hardware generally performed much faster than software solvers such as CPLEX.

The D-Wave platform has caused some controversy, since it is unclear what exactly happens in the system.  The “holy grail”, as I understand it (it is not Aaronson’s fault if I get this wrong!) is quantum entanglement.  This sort of “action at a distance” is key to much of the theory behind quantum computing.  Does D-Wave get this?  The paper covers this:

Finally there has been some debate as to whether D-Wave chips form a ‘truly’ quantum system; this has been demonstrated for smaller (8 and 16 qubit) systems … but not, as yet, for the full hardware graph. Boixo et al. report that the hardware demonstrates quantum tunneling (which would be inconsistent with classical annealing), and find some evidence of qubit entanglement. A recent review article in Science [Science, 13 May 2011] concludes that the hardware is at least a little quantum mechanical.

I suppose the researchers ended up at least a little tenured for this work.

But how does it work?  Generally, pretty well:

In all three experiments the V5 hardware or its hybrid counterpart Blackbox found solutions that tied or bettered the best solutions found by software solvers. Performance was especially impressive on instances that can be solved directly in hardware. On the largest problem sizes tested, the V5 chip found optimal solutions in less than half a second, while the best software solver, CPLEX, needed 30 minutes to find all optimal solutions.

We have also carried out some tests to compare V6 to the software solvers; very preliminary results suggest that on QUBO instances with n = 503, the hardware can find optimal solutions around 10,000 times faster than CPLEX.

Pretty impressive sounding, and certainly something worth paying attention to.

But still, it is important to understand some key aspects of this work:

1) The solutions found are “just” heuristic solutions.  There is no proof of optimality, nor can there be with the setup being used.  In essence, we are getting a very fast population search with what looks like simulated annealing (a bare-bones sketch of that sort of heuristic appears after this list).

2) The comparator is primarily CPLEX, a code designed for finding and proving optimal solutions.  I have refereed 3 papers in the last month alone that give heuristics that are orders of magnitude faster than CPLEX in finding good solutions.  Of course, CPLEX is generally infinitely faster in proving optimality (since it wins whenever it proves optimality).  Beating CPLEX by orders of magnitude on some problems in finding good solutions (particularly with short times) does not require fancy computing.

3) The problems chosen are not exactly in CPLEX’s sweet spot.  Consider maximum weighted 2-SAT, where the goal is to maximize the weight of satisfied clauses in a satisfiability problem, where each clause has exactly two variables.  The natural linear relaxation of this problem results in a solution with all variables taking on value .5, which gives no guidance to the branch-and-bound search.  Now CPLEX has a lot of stuff going on to get around these problems, but there are certainly NP-Hard problems for which CPLEX has better relaxations, and hence better performance.  And if you take any sort of problem and add complicating constraints (say, max weighted 2-sat with restrictions on the number of  TRUE values in various subsets of variables), I’ll put my money on CPLEX.

4) The D-Wave system is still a bit pricey:  at about $10,000,000, there are cheaper ways to get the sort of speedup that McGeoch and Wang found.
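
To make point 1 concrete, the software side of such a comparison does not need to be anything exotic. Here is a bare-bones simulated annealing routine for QUBO (my own sketch with arbitrary made-up parameters, not the D-Wave algorithm and not one of the tuned codes Cathy alludes to below); something in this spirit, run for a fraction of a second, is the natural "find a good solution fast" baseline.

import math
import random

def qubo_value(Q, x):
    # Objective x' Q x for a 0/1 vector x and a dense matrix Q.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal_qubo(Q, iters=20000, t_start=2.0, t_end=0.01, seed=0):
    # Plain single-flip simulated annealing for minimizing x' Q x over x in {0,1}^n.
    # (A serious code would update the objective incrementally rather than
    # recomputing it, and would tune the schedule; these choices are arbitrary.)
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    cur = best = qubo_value(Q, x)
    best_x = x[:]
    for k in range(iters):
        t = t_start * (t_end / t_start) ** (k / iters)   # geometric cooling
        i = rng.randrange(n)
        x[i] ^= 1                                        # flip one bit
        new = qubo_value(Q, x)
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            cur = new
            if cur < best:
                best, best_x = cur, x[:]
        else:
            x[i] ^= 1                                    # reject the move
    return best, best_x

# A small random instance, purely for illustration.
rng = random.Random(1)
n = 30
Q = [[rng.randint(-5, 5) for _ in range(n)] for _ in range(n)]
print(anneal_qubo(Q)[0])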

Cathy understands the comparator issue:

It would of course be interesting to see if highly tuned implementations of, say, tabu search or simulated annealing could compete with Blackbox or even QA on QUBO problems; some preliminary work on this question is underway.

Despite these qualms, Cathy has gotten me interested.

Google too, by the looks of it.

Be sure to check out Scott Aaronson’s take on this, as he resumes his role as “Chief D-Wave Skeptic”.

Registries To Avoid Publication Bias

I have been thinking about the issue of how a field knows what it knows. In a previous post, I wrote about how the field of social psychology is working through the implications of fraudulent research, and is closely examining the cozy interactions between journals, reviewers, and famous researchers. And any empirical field based on statistical analysis has got to live with the fact that if there are 1000 results in the field, some number (50 perhaps, if p=.05 is a normal cutoff and lots of results are just under that value) are going to be wrong just because the statistical test created a false positive. Of course, replication can help determine what is real and what is not, but how often do you see a paper “Confirming Prof. X’s result”? Definitely not a smooth path to tenure.

This is worse if malevolent forces are at work.  Suppose a pharmaceutical company has bet the firm on drug X, and they want to show that drug X works.  And suppose drug X doesn’t work.  No problem!  Simply find 20 researchers, sign them to a non-disclosure, and ask them to see if drug X works.  Chances are one or more researchers will come back with a statistically significant result (in fact, there is about a 65% chance that one or more will, given a p=.05).  Publish the result, and voila!  The company is saved!  Hurray for statistics and capitalism!
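
The arithmetic behind that parenthetical is just the complement of “all twenty studies come up empty”:

p = 0.05          # significance cutoff, i.e., false positive rate per study
studies = 20
print(1 - (1 - p) ** studies)   # 0.6415...: roughly a 64% chance at least one study "succeeds"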

Fortunately, I am not the first to see this issue:  way back in 1997, the US Congress passed a bill requiring the registration of clinical trials, before the trials get underway.

The first U.S. Federal law to require trial registration was the Food and Drug Administration Modernization Act of 1997 (FDAMA) (PDF).

Section 113 of FDAMA required that the National Institutes of Health (NIH) create a public information resource on certain clinical trials regulated by the Food and Drug Administration (FDA). Specifically, FDAMA 113 required that the registry include information about federally or privately funded clinical trials conducted under investigational new drug applications (INDs) to test the effectiveness of experimental drugs for patients with serious or life-threatening diseases or conditions.

This led to the creation of clinicaltrials.gov (where I am getting this history and the quotes) in 2000.  This was followed by major journals requiring registration before papers could be considered for publication:

In 2005 the International Committee of Medical Journal Editors (ICMJE) began to require trial registration as a condition of publication.

The site now lists more than 130,000 trials from around the world.  It seems this is a great way to avoid some (but by no means all!) fraud and errors.

I think it would be useful to have such systems in operations research.  When I ran a DIMACS Challenge twenty years ago, I had hoped to keep up with results on graph coloring so we had a better idea of “what we know”:  then and now there are graph coloring values in the published literature that cannot be correct (since, for instance, they contradict published clique values:  something must be wrong!).  I even wrote about a system more than two years ago but I have been unable to find enough time to take the system seriously.  I do continue to track results in sports scheduling, but we as a field need more such systems.
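
Part of the appeal of a registry is that some of this checking can be automated: any clique forces its vertices to receive distinct colors, so a claimed coloring with fewer colors than a claimed clique has vertices cannot be right. A toy sketch of that sanity check (the entries below are hypothetical placeholders, not real published values):

# name -> best claimed coloring and best claimed clique size (hypothetical numbers)
claimed = {
    "instance_A": {"colors": 48, "clique": 44},
    "instance_B": {"colors": 30, "clique": 33},   # impossible: a 33-clique needs 33 colors
}

def check_registry(claimed):
    # Flag entries where the claimed chromatic number is below the claimed clique number.
    for name, entry in claimed.items():
        if entry["colors"] < entry["clique"]:
            print(f"{name}: a {entry['colors']}-coloring contradicts a clique of size "
                  f"{entry['clique']} -- something must be wrong")

check_registry(claimed)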


Benchmarks: Coloring, Sports and Umpires

I have always felt strongly that operations research needs more libraries of instances for various problem classes, along with listings of current best solutions. By tracking how well we solve problems over time, we can show how we advance as a field. It also makes it easier to evaluate new work, making work easier for both authors and referees.

I began this direction almost two decades ago during a year I spent at DIMACS (a fantastic research institute on discrete mathematics and computer science based at Rutgers), when I (together with David Johnson) ran their Computational Challenge, with an emphasis on solving graph coloring, clique, and satisfiability instances. From that, I put together a page on graph coloring (which has to be one of the oldest pages on the internets!). David, Anuj Mehrotra and I followed that up in 2003 with an updated challenge just on graph coloring. It was great to see people re-use the same instances, so we could understand the advances in the field. It is hard to tell exactly how many papers have used the various benchmark repositories, but they are clearly the basis for hundreds of papers, based on Google Scholar hits on the DIMACS paper referencing the instances.

I had this experience in mind ten years ago when Kelly Easton, George Nemhauser and I wanted to publish work we had done with Major League Baseball on their scheduling. It made no sense to use MLB as a benchmark, since there is only one instance per year and much of the information about the needs of a schedule is confidential. So we created the Traveling Tournament Problem that abstracts two key issues in MLB scheduling: travel distance, and “flow” (the need to mix home and away games). We created a set of instances, solved a few of the smaller ones, and let the problem loose on the world. The result was fantastic: dozens of groups started working on the problem, and we could clearly see which techniques worked and which didn’t.
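
For readers who have not run into it: a Traveling Tournament Problem solution is a double round-robin schedule, and its cost is total team travel, with each team starting at home, driving from one away venue directly to the next, and returning home at the end (and around any home stand); the "flow" side shows up as the separate requirement that no team play more than three consecutive home or three consecutive away games. A minimal evaluator, with a made-up four-team distance matrix, gives the idea.

def ttp_travel(distance, schedule):
    # Total travel for a schedule where schedule[t][r] is the venue team t
    # plays at in round r (venue == t means a home game). Teams start and
    # end the season at home.
    total = 0
    for team, venues in enumerate(schedule):
        location = team
        for venue in venues:
            total += distance[location][venue]
            location = venue
        total += distance[location][team]    # return home after the last round
    return total

# Hypothetical distances and a feasible 4-team double round robin.
distance = [[0, 10, 15, 34],
            [10, 0, 22, 21],
            [15, 22, 0, 11],
            [34, 21, 11, 0]]
schedule = [[1, 0, 3, 0, 2, 0],    # team 0: away at 1, home, away at 3, home, away at 2, home
            [1, 1, 2, 0, 3, 1],
            [3, 0, 2, 2, 2, 1],
            [3, 1, 3, 2, 3, 0]]
print(ttp_travel(distance, schedule))   # 446 with these made-up numbers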

I had made a terrible mistake when creating benchmarks for graph coloring.  I didn’t keep track of best results.  This led to a fair amount of chaos in the field, with contradictory results appearing (claimed coloring values better than claimed lower bounds), and no clear picture of where things are going.  I had thought at one time that I would try to clean things up with a “Repository of Optimization Instances and Solutions”, but too many other things have intruded for me to spend the time necessary on that.  Fortunately, Stefano Gualandi and Marco Chiarandini have put together a site for graph coloring solutions, and I hope they will be successful in their efforts to put a little structure in the field.

I learned from that mistake and was much more diligent about keeping track of solutions for the Traveling Tournament Problem.  The TTP site is always up to date (OK, almost always), so people can reasonably trust the results there.  I have recently extended the site to include instances for non-round-robin scheduling and for the Relaxed TTP (where there is an opportunity for off-days).

One relatively new problem I am excited about is scheduling umpires in sports. Former doctoral students Hakan Yildiz (now at Michigan State) and Tallys Yunes (Miami) and I developed a problem called the Traveling Umpire Problem which again tried to abstract out key issues in Major League Baseball scheduling. In this case, the umpires want to travel relatively short distances (unlike the players, the umpires have no “home city”, so they are always traveling) but should not see the same teams too often. This problem feels easier than the Traveling Tournament Problem, but we still cannot solve instances with 14 or more umpires to optimality. This work received a fair amount of interest when the university PR people caught hold of our Interfaces paper. Since that paper, Hakan and I have put together a couple of other papers, exploring optimization-based genetic algorithms and Benders-based local search approaches for this problem (to appear in Naval Research Logistics). Both papers illustrate nice ways of using optimization together with heuristic approaches. The website for the problem gives more information on the problem, along with instances and our best solutions.

I don’t think my repositories of benchmarks will be as influential as, say, MIPLIB, which focuses on mixed integer programs. But I do like to think that they make the operations research world run a bit smoother.

Call for Challenging MIP Problems

MIPLIB is a collection of instances of Mixed Integer Programs.  Versions of MIPLIB have been around since 1992, and these have been invaluable as we see how we have advanced in solving difficult MIP instances. Some instances that were essentially unsolvable twenty years ago can now be solved in a fraction of a second.  We know we are getting better!  Of course, there is a downside:  maybe we are just getting better at solving MIPLIB instances.  For instance, here is an extract from an excellent optimization code to attack MIPLIB:

if (problem == "team10") then {
    sol_val = 924;
}
else if (problem == "a1c1s1") then { ...

Now, I don’t claim that any real code cheats this blatantly, but it is inevitable that as codes use benchmarks like MIPLIB, they will become tuned to the issues that come from those instances.

This makes it important that MIPLIB reflect a broad range of issues faced “in the real world”.  So it is very good that the MIPLIB people (a group based at ZIB in Berlin, but including people from all over) are updating the library to create MIPLIB 2010.  If you have hard or real-life MIP instances, particularly if they are taking 15 minutes to 2 hours with commercial codes, you are welcome to upload those instances for consideration for the new library.  The more varied the library, the better our field will be.

There is one other biasing aspect that an update to MIPLIB 2010 does not fix, and that is the emphasis on talking about integer programs as MPS files. Once an instance is in an MPS file, it is often extremely difficult to back out the underlying structure. Our constraint programming brethren are happy to talk about formulations with global constraints like alldifferent and to have codes that explicitly take advantage of those structures. I do wonder if we would be better at solving integer programs if we talked about them as higher-level structures (like tour(x1,x2,x3) and matching(x,y) and other common substructures). By limiting ourselves to MPS format, the structures we exploit tend to be those easiest to find, like knapsack and set covering.
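
As a small illustration of what gets flattened away (purely illustrative, not how any particular modeling system writes its files): a modeler can state alldifferent(x1, x2, x3) in one line, but by the time it reaches an MPS file the structure has dissolved into anonymous-looking assignment rows that a MIP code must then rediscover.

def alldifferent_as_binary_rows(variables, values):
    # One way alldifferent(variables) over a finite value set ends up looking
    # at the MPS level: y[v,d] = 1 means variable v takes value d.
    rows = []
    for v in variables:                                   # each variable gets exactly one value
        rows.append(" + ".join(f"y[{v},{d}]" for d in values) + " = 1")
    for d in values:                                      # each value used at most once
        rows.append(" + ".join(f"y[{v},{d}]" for v in variables) + " <= 1")
    return rows

for row in alldifferent_as_binary_rows(["x1", "x2", "x3"], [1, 2, 3]):
    print(row)

A constraint programming solver that sees the single global constraint can run a specialized matching-based propagator over it; a MIP solver that sees only the rows has to hunt for that structure, if it ever finds it.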

But that is a topic for the future:  right now, we need to be sure MIPLIB 2010 is as varied and interesting as it can be!

Eating Better and Better Routing

About a year ago, my wife and I decided to eat better by doing more “real” cooking. A great help in this has been the magazine “Real Simple”. Every month, the magazine publishes a series of recipes, each generally requiring only 20-30 minutes of preparation time. We like these recipes because they use real ingredients: none of this “Pour a can of cream of celery soup over everything”. We’ve agreed to cook everything, whether it sounds appealing or not, and of the dozens of recipes we have gone through, essentially all of them were edible, with most very good and a few definite keepers (*). The authors of the recipes do seem to have a fondness for leeks and fennel, but we have grown used to that. Alexander, my six-year-old son, eats the same food of course, and generally approves of what we are cooking.

I was delighted with this month’s issue where they had a short blurb on the website route4me.com.  The description appeals to their readership:

You need to get to the library before closing, but you also have to pick up the dry cleaning, the kids from school (don’t forget that one), and the inevitable snack along the way.  Enter up to 10 addresses on this site and it will calculate the shortest route to get it all done, complete with driving directions.

The Traveling Salesman Problem makes an appearance in our cooking magazine! Naturally I went over to the site and checked it out by putting in a few cities (there seems to be a limit of 6, but maybe signing up gets you more): Pittsburgh, Cleveland, Atlanta, Winnipeg, Minot (ND), and Los Angeles. I clicked “round trip” to get back home and ended up … with a somewhat surprising route:

Hmm… that crossing in Illinois is a little suspicious. This doesn’t look like the optimal route. Is it? Maybe it is hard to get from Cleveland to Winnipeg due to the lakes? Perhaps this was an example where the underlying road network really has a strong effect on the optimal tour.

I checked things out, and compared this route to the route going from Pittsburgh-Cleveland-Winnipeg-Minot-LA-Atlanta-Pittsburgh.  Turns out the route from route4me is about 10 hours (driving) longer than the crossing-free route.  What kind of optimization is this?

It took a bit more playing around before I figured out what route4me was doing.  Their definition of a “round trip” is the minimum path visiting all the cities from the starting point, followed by going from the final city back to the starting point.    The best path is Pittsburgh-Cleveland-Atlanta-Winnipeg-Minot-LA;  they then just add in the LA-Pittsburgh leg.  Kind of a funny definition, but I am sure they document it someplace.
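
To see how much the definition matters, here is a brute-force comparison on a small made-up matrix of driving hours (loosely patterned on the cities above, but the numbers are invented): minimizing the open path and then tacking on the trip home is not the same problem as minimizing the closed tour.

from itertools import permutations

def path_length(dist, order):
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

def best_path_then_home(dist, start=0):
    # route4me-style "round trip": shortest path through all cities from start,
    # with the leg from the final city back to start added afterwards.
    others = [c for c in range(len(dist)) if c != start]
    best = min(permutations(others), key=lambda p: path_length(dist, (start,) + p))
    tour = (start,) + best + (start,)
    return tour, path_length(dist, tour)

def best_round_trip(dist, start=0):
    # True traveling salesman tour: minimize the closed-loop length directly.
    others = [c for c in range(len(dist)) if c != start]
    best = min(permutations(others),
               key=lambda p: path_length(dist, (start,) + p + (start,)))
    tour = (start,) + best + (start,)
    return tour, path_length(dist, tour)

# Made-up driving hours between five cities (0 is "home").
dist = [[0, 2, 10, 22, 37],
        [2, 0, 11, 20, 37],
        [10, 11, 0, 25, 31],
        [22, 20, 25, 0, 27],
        [37, 37, 31, 27, 0]]
print(best_path_then_home(dist))   # ((0, 1, 2, 3, 4, 0), 102)
print(best_round_trip(dist))       # ((0, 1, 3, 4, 2, 0), 90): 12 hours shorter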

Overall, I think I will stick with Real Simple for telling me how best to prepare kale, and leave the traveling salesman information to other sources.

[Thanks to Ilona for pointing out the blurb in the magazine.]

(*)  Our favorite recipe so far has been “Scallops with Sweet Cucumber and Mango Salsa”.  Call it the official recipe of Michael Trick’s Operations Research Blog!

Serves 4 | Hands-On Time: 25 minutes | Total Time: 25 minutes

Ingredients

  • 1 cup long-grain white rice (such as jasmine)
  • 2 mangoes, cut into 1/2-inch pieces
  • 2 Kirby cucumbers or 1 regular cucumber, peeled and cut into 1/2-inch pieces
  • 1 tablespoon grated ginger
  • 2 teaspoons fresh lime juice
  • 2 tablespoons extra-virgin olive oil
  • 1/2 cup fresh cilantro, chopped
  • kosher salt and pepper
  • 1 1/2 pounds large sea scallops

Directions

  1. Cook the rice according to the package directions.
  2. Meanwhile, in a medium bowl, combine the mangoes, cucumbers, ginger, lime juice, 1 tablespoon of the oil, the cilantro, 1/2 teaspoon salt, and 1/8 teaspoon pepper; set aside.
  3. Rinse the scallops and pat them dry with paper towels. Season with 1/4 teaspoon salt and 1/8 teaspoon pepper. Heat the remaining oil in a large skillet over medium-high heat. Add the scallops and cook until golden brown and the same color throughout, about 2 minutes per side. Divide among individual plates and serve atop the rice with the salsa.

Solver Foundation Version 2 Announced

Just in time for the INFORMS Meeting in San Diego, the Microsoft Solver Foundation folks have announced Solver Foundation Version 2. I am particularly excited about this since I have promised (and have been negligent about) a new course this year entitled “Operations Research Implementations” aimed at our MBA students. The idea is to go beyond the 20 variable, 20 constraint models into some “real” models. We (I am doing this with Willem van Hoeve) are looking at a number of optimization systems on which to base this course. Our students are extremely comfortable with Excel, and the screen shot from the post is quite provocative. The students are generally not programmers, but maybe this is something they can navigate.

Lots of new additions to Solver Foundation 2, including automatic handling of uncertainty, better MIP performance through Gurobi 2.0 as the default solver, constraint programming improvements, and much more. I wonder if our MBAs are ready for Second-Order Conic Programming?

Mittelmann’s Benchmarks: CPLEX versus Gurobi

Hans Mittelmann has some new benchmarks comparing CPLEX 12.1 with GUROBI 1.1.2 on various mixed integer linear programming instances (I last wrote on these benchmarks last January with earlier versions of both codes:  be sure to check out the comments from that post since many of those comments apply to this also).  He covers both feasibility and optimization problems.

Checking the computation times for one set of optimization instances:

  problem     CPLEX1    GUROBI1     CPLEX4    GUROBI4
------------------------------------------------------
bc1            101         94         81         51
bienst2        165         58         87         27
dano3_5        148        618        276        362
mark._4_0      225         45         86         36
mark._5_0        f       6909          f       4368
mkc1             5        113          5
qap10           51         76         48         73
seymour1       178        285        116        161
swath2          13         10          5          6
swath3          26        563         49         53
30_05_100        7         11          4         11

The CPLEX1 and GUROBI1 use one processor; the …4 versions use 4.  The “f” means optimality was not proved in 7200 seconds (all times in seconds).   The above table is not representative, so be sure to check out Hans’ page.

A couple of conclusions that I come up with (your mileage may vary):

  • Gurobi is a very impressive code:  whether CPLEX or Gurobi will be faster on a particular instance is simply not predictable, which is quite a feat.
  • Parallel speedup is quite unpredictable.  Often there is no, or minimal, speedup;  sometimes there is huge speedup;  sometimes there is a slowdown.  None of this is surprising:  the initial linear relaxation is hard to parallelize, and the nondeterministic nature of tree search can lead to getting “lucky” or “unlucky” in the parallel search.

If we pull a few lines from the feasibility benchmarks:

============================================================
problem        CPLEX    FP-ab     FP-bfl    SCIP     GUROBI
------------------------------------------------------------
atlanta-ip      848        23        21       391       189
core4872-1529     1       413        52       374         1
ds                1         -      5350        38         3
germanrr        183        10         6       200        19
momentum1         1        23        17      6257         2
momentum2      6181        47        71      1504      4118
momentum3         1       350       627         -         4
msc98-ip        185        19        25      1754        33
neos16           47       236       241        43        72
neos-506428    1338      2969      4531      1739      2694

(FP are Feasibility Pump approaches; SCIP is a fast non-commercial code).  Here the results are all over the map: practically anything can win, anything can lose, and there can be three orders of magnitude difference between running times.

Now speed is not the only thing to look for in an optimization code (support, interfaces, price are a few other things to look at), but these benchmarks are very useful in trying to compare codes.  Thanks Hans!

Careful with Wolfram|Alpha

Wolfram|Alpha is an interesting service. It is not a search engine per se. If you ask it “What is Operations Research” it draws a blank (*) (mimicking most of the world) and if you ask it “Who is Michael Trick” it returns information on two movies “Michael” and “Trick” (*). But if you give it a date (say,  April 15, 1960), it will return all sorts of information about the date:

Time difference from today (Friday, July 31, 2009):
49 years 3 months 15 days ago
2572 weeks ago
18 004 days ago
49.29 years ago

106th day
15th week

Observances for April 15, 1960 (United States):
Good Friday (religious day)
Orthodox Good Friday (religious day)

Notable events for April 15, 1960:
Birth of Dodi al-Fayed (businessperson) (1955): 5th anniversary
Birth of Josiane Balasko (actor) (1950): 10th anniversary
Birth of Charles Fried (government) (1935): 25th anniversary

Daylight information for April 15, 1960 in Pittsburgh, Pennsylvania:
sunrise | 5:41 am EST
sunset | 6:59 pm EST
duration of daylight | 13 hours 18 minutes

Phase of the Moon:
waning gibbous moon (*)

(Somehow it missed me in the famous birthdays: I guess their database is still incomplete)

It even does simple optimization:

min {5x^2 + 3x + 12} = 231/20 at x = -3/10 (*)
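
That one checks out: the minimum of a x^2 + b x + c sits at x = -b/(2a), and a quick check in exact arithmetic agrees with Wolfram|Alpha here.

from fractions import Fraction

a, b, c = Fraction(5), Fraction(3), Fraction(12)
x_min = -b / (2 * a)                          # vertex of the parabola
print(x_min, a * x_min**2 + b * x_min + c)    # -3/10 231/20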

And, in discrete mathematics, it does wonderful things like generate numbers (permutations, combinations, and much more) and even put out a few graphs:
[image: graphs generated by Wolfram|Alpha] (*)

This is all great stuff.

And it is all owned by Wolfram, who defines how you can use it. As Groklaw points out, the Wolfram Terms of Service are pretty clear:

If you make results from Wolfram|Alpha available to anyone else, or incorporate those results into your own documents or presentations, you must include attribution indicating that the results and/or the presentation of the results came from Wolfram|Alpha. Some Wolfram|Alpha results include copyright statements or attributions linking the results to us or to third-party data providers, and you may not remove or obscure those attributions or copyright statements. Whenever possible, such attribution should take the form of a link to Wolfram|Alpha, either to the front page of the website or, better yet, to the specific query that generated the results you used. (This is also the most useful form of attribution for your readers, and they will appreciate your using links whenever possible.)

And if you are not academic or not-for-profit, don’t think of using Wolfram|Alpha as a calculator to check your addition (“Hmmm… is 23+47 really equal 70? Let me check with Wolfram|Alpha before I put this in my report”), at least not without some extra paperwork:

If you want to use copyrighted results returned by Wolfram|Alpha in a commercial or for-profit publication we will usually be happy to grant you a low- or no-cost license to do so.

“Why yes it is. I better get filling out that license request!  No wait, maybe addition isn’t a ‘copyrighted result’.  Maybe I better run this by legal.”

Groklaw has an interesting comparison to Google:

Google, in contrast, has no Terms of Use on its main page. You have to dig to find it at all, but here it is, and basically it says you agree you won’t violate any laws. You don’t have to credit Google for your search results. Again, this isn’t a criticism of Wolfram|Alpha, as they have every right to do whatever they wish. I’m highlighting it, though, because I just wouldn’t have expected to have to provide attribution, being so used to Google. And I’m highlighting it, because you probably don’t all read Terms of Use.

So if you use Wolfram|Alpha, be prepared to pepper your work with citations (I have done so, though the link on the Wolfram page says that the suggested citation style is “coming soon”: I hope I did it right and they do not get all lawyered up) and perhaps be prepared to fill out some licensing forms.  And it might be a good idea to read some of those “Terms of Service”.

——————————————–
(*) Results Computed by Wolfram Mathematica.

A Better Random Number Generator?

In operations research, we often use “random” numbers.  Whether we are doing a simulation of a manufacturing facility, generating future economic scenarios in financial optimization, or creating “random” instances to test our algorithms, we use up lots of random numbers.  Terry Pratchett might wonder whether we are using up all the randomness in our need for these values!

For the most part, our “random” numbers are not that at all: they are numbers generated by a particular formula, the results of which pass certain tests for randomness. But any random number generator has flaws and patterns. A “weak” random number generator (think of the “middle square” approach, where a number is squared and the middle digits are used as the random number and as the seed for the next squaring) will quickly show its flaws, while a strong one will hide its patterns longer.
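
To see how quickly a weak generator betrays itself, here is the middle-square method in a few lines of Python (a classic toy, shown only to make the point; the seed and digit count here are arbitrary). Many seeds collapse into tiny cycles almost immediately:

def middle_square(seed, n_digits=4, count=12):
    # von Neumann's middle-square generator: square the current value and
    # keep the middle n_digits digits as the next "random" number.
    values, x = [], seed
    for _ in range(count):
        squared = str(x * x).zfill(2 * n_digits)
        mid = len(squared) // 2 - n_digits // 2
        x = int(squared[mid:mid + n_digits])
        values.append(x)
    return values

print(middle_square(540))
# [2916, 5030, 3009, 540, 2916, 5030, 3009, 540, ...]: stuck in a four-value cycle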

But what if you want true randomness? Or, to avoid philosophical and physical discussions, as random as rolling a die? The Dice-O-Matic can come to your rescue! Able to generate more than a million die rolls per day, the Dice-O-Matic is a great addition to the offices of anyone doing work in simulation. With a footprint of a mere 18″ by 18″ (albeit 7 feet tall), this certainly looks to be about as random as one could hope. And with the periodic melting and squashing of dice (due to the ire of gamers annoyed at an outcome), any biases will quickly be eliminated.

I think we should all include this on our next NSF grant proposals.