Grizzlies, Pandas, and Optimal Ecological Structures

Last year, a group of students in one of my classes did a project on designing a grizzly bear habitat, inspired by the work at Cornell’s wonderful Institute for Computational Sustainability. In that project, the goal was to pick out a collection of geographic areas that formed a contiguous zone that the grizzlies could move freely through. As the ICS description says:

Land development often results in a reduction and fragmentation of natural habitat, which makes wildlife populations more vulnerable to local extinction. One method for alleviating the negative impact of land fragmentation is the creation of conservation corridors, which are continuous areas of protected land that link zones of biological significance.

My colleague, Willem van Hoeve, had worked on variants of this problem and had some nice data for the students to work with. The models were interesting in their own right, with the “contiguity” constraints causing the most challenge to the students. The results of the project were corridors that were much cheaper (by a factor of 10) than the estimates of the cost necessary to support the wildlife. The students did a great job (as Tepper MBA students generally do!) using AIMMS to model and find solutions (are there other MBA students who come out knowing AIMMS? Not many, I would bet!). But I was left with a big worry. The goal here was to find corridors linking “safe” regions for the grizzlies. But what keeps the grizzlies in the corridors? If you check out the diagram (not from the student project but from a research paper by van Hoeve and his coauthors), you will see the safe areas in green, connected by thin brown lines representing the corridors.  It does seem that any self-respecting grizzly would say:  “Hmmm…. I could walk 300 miles along this trail they have made for me, or go cross-country and save a few miles.”  The fact that the cross-country trip goes straight through, say, Bozeman, Montana, would be unfortunate for the grizzly (and perhaps the Bozemanians).  But perhaps the corridors could be made appealing enough for the grizzlies to keep them off the interstates.
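The corridor models themselves are far richer than I can do justice to here, but the core idea of buying a cheap, contiguous strip of land connecting reserves shows up even in a toy calculation. Below is a minimal sketch of my own (not the students’ AIMMS model, and far simpler than the actual corridor formulations, which handle multiple reserves, budgets, and habitat quality): treat the landscape as a grid of parcels with made-up acquisition costs, and find the cheapest connected path of parcels between two protected corners.

```python
# A toy, illustrative corridor calculation: parcel costs are hypothetical.
import heapq

# Per-parcel acquisition costs on a small grid; the reserves sit at the two
# opposite corners and cost nothing to "buy".
cost = [
    [0, 4, 9, 9, 9],
    [6, 2, 8, 9, 9],
    [9, 3, 1, 7, 9],
    [9, 9, 2, 3, 0],
]
rows, cols = len(cost), len(cost[0])
start, goal = (0, 0), (rows - 1, cols - 1)

def cheapest_corridor(cost, start, goal):
    # Dijkstra's algorithm where entering a parcel pays that parcel's cost.
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Recover the corridor (sequence of parcels) by walking back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[goal], list(reversed(path))

total, corridor = cheapest_corridor(cost, start, goal)
print("corridor cost:", total)
print("parcels:", corridor)
```

Connecting more than two reserves at minimum parcel cost is closer to a node-weighted Steiner tree problem, which is where the integer programming and the “contiguity” constraints that challenged the students come in; the shortest-path special case above just shows why the optimized corridors hug the cheap land.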

I thought of this problem as I was planning a trip to China (which I am taking at the end of November).  After seeing a picture of a ridiculously cute panda cub (not this one, but similarly cute), my seven-year-old declared that we had to see the pandas.  And, while there are pandas in the Beijing zoo, it was necessary to see the pandas in the picture, who turned out to be from Chengdu.  So my son was set on going to Chengdu.  OK, fine with me:  Shanghai is pretty expensive so taking a few days elsewhere works for me.

As I explored the panda site, I found some of the research projects they are pursuing.  And one was perfect for me!

Construction and Optimization of the Chengdu Research Base of Giant Panda Breeding Ecological System

I was made for this project!  First, I have the experience of watching my students work on grizzly ecosystems (hey, I am a professor:  seeing a student do something is practically as good as doing it myself).  Second, and more importantly, I have extensive experience in bamboo, which is, of course, the main food of the panda.  My wife and I planted a “non-creeping” bamboo plant three years ago, and I have spent two years trying to exterminate it from our backyard without resorting to napalm.  I was deep into negotiations to import a panda into Pittsburgh to eat the cursed plant before I finally seemed to gain the upper hand on the bamboo.  But I fully expect the plant to reappear every time we go away for a weekend.

Between my operations research knowledge and my battle-scars in the Great Bamboo Battle, I can’t think of anyone better to design Panda Ecological Systems.  So, watch out “Chengdu Research Base on Giant Panda Breeding”:  I am headed your way and I am ready to optimize your environment.  And if I sneak away with a panda, rest assured that I can feed it well in Pittsburgh.

This is part of the INFORMS September Blog Challenge on operations research and the environment.

Social Networks and Operations Research

Until recently, I pretty well had a handle on my use of social networks.   Rather than try to use a single social networking system in multiple ways, I have used different systems in different ways and for different networks.

  • I have a blog, of course, and I use that to pontificate on various aspects of operations research.  While the communication is primarily one-way, I see this as a network since 1) I have enough regular commentators that I feel there is some two-way conversation, and 2) there is a network of OR bloggers (the “blogORsphere”) and our various posts often riff off each other, particularly now that INFORMS provides a monthly topic for us to use in common (this post is part of the July challenge on OR and social networks).  You can get a feed of all the OR blogs either in the sidebar at my page or through a Google Reader site. If you have an operations research blog and are not included, please let me know! Feedburner and my log files suggest that each post is read about 3000 times in the first week of posting (after that, each post gets a regular trickle of readers through search).
  • I have a twitter account (@miketrick) where 90% of my tweets have some operations research content (denoted by an “#orms” hash tag).  About 10% of the time, I am griping about some failure in customer service or some other non-operations research matter.  When I post on my blog, a tweet automatically goes out through my twitter account. I follow 183 other twitter users and am followed by just over 700 others, most presumably for the #orms content.
  • I have a facebook account (michael.trick).  Again, a post on my blog generates a facebook entry, but I primarily use facebook for my real-life friends and family, and rarely post on operations research (except the blog entries).

And that seemed enough!  But recently, there have been more social networks that I have had to integrate into my life, and the existing ones have changed.

  • LinkedIn remains a mystery to me.  I certainly have done a fair amount of linking, with 365 direct connections.  Many of these are former students who want to stay connected to me, and I am happy to be connected.  It has even been useful when getting a request like “Do you know anyone at X who can help me with Y?”.  And somehow I am getting emails on conversations that are going on at LinkedIn that actually look pretty good.  But when I go to the site, I can never find where those conversations are coming from, and I am just generally overwhelmed with minutiae about who has changed their picture and commented on what.  My blog and twitter feed get mirrored at LinkedIn, but otherwise this is just not something I have been active in.
  • OR-Exchange is a Question and Answer site that focuses on operations research and analytics.  In many ways, this was a response to the death (or near death: there are some diehards holding on) of the Usenet group sci.op-research.  That group died under the weight of “solution key sellers”, ersatz conference announcements, mean-spirited responses from curmudgeonly long-timers, and general lack of interest.  So I registered the site or-exchange.com at a Q&A site, and started things off.  Since then, the system has taken on a life of its own.  INFORMS now hosts it, and there are a dozen or so very active participants along with a larger number of regulars.  I am not sure the system has really reached critical mass, but I am very hopeful for it.
  • Facebook is moving in a direction that might make it more relevant for my professional life.  Bjarni Kristjansson has put together a group “I like operations research” that is getting some traction.  I put together a page that provides the feed to all of the operations research blogs that I can find (this is the same group of entries that is in the sidebar of my blog).
  • Reddit.com is a very popular way to point out links, and “cavedave” has done a great job in putting together a “sysor” subreddit.  With a couple thousand readers, a post there gets a noticeable bump in readership.
  • Google Plus simply baffles me at the moment.  I have an account, but I don’t know how to treat it.  It seems silly to just recreate a twitter feed in plus, but there doesn’t seem to be a hole in my personal social networking activities that requires plus.  I had already done the “circles” thing by my different uses of facebook, twitter, and my blog, so it is not a great addition.  But I hate to think I am missing out on something big.  On the other hand, I did spend a couple of days on Google Wave, so I am a little hesitant to simply leap on this bandwagon.

As I look through all of this, I can’t help but reflect on how fragmented this all is.  Wouldn’t it be great to have a real community site where all of us in operations research can get together to share thoughts, papers, links, and more?  Bjarni is working hard at pushing INFORMS in the direction of providing such a community site.  But the sad thing is that we had such a site more than ten years ago, when social networking was in its infancy.  ILOG, through people like Irv Lustig, created a site, e-optimization.com.  It lasted a couple of years, but could not survive the pressures of the dot-com crash.  INFORMS keeps a snapshot of the site (with limited functionality), and the site is still impressive long after it shut its doors.

And, as I look closer to all of the activity, I am amazed that there is not more.  Why are there not hundreds of operations research blogs, instead of the couple of dozen that I list?  Why doesn’t every doctoral student in operations research have a twitter account?  Is there a social networking world I am missing?  If not, where is everybody?

Of course, if you are reading this, then you are in my social network, and I am very grateful that you are.


Explaining Operations Research to, and being, a Muggle

In J.K. Rowling’s Harry Potter books, “Muggles” are people who have no magical ability, and, indeed, no knowledge of the magical world.  The term “Muggle” is not exactly a compliment, and veers towards a pejorative.  From the first book:

Hagrid: “I’d like ter see a great Muggle like you stop him,”
Harry: “A what?”
Hagrid: “A Muggle. It’s what we call non-magic folk like them. An’ it’s your bad luck you grew up in a family o’ the biggest Muggles I ever laid eyes on.”

It is a little hard to see anything positive about Muggle in this exchange!  Muggles are often willfully blind to the magic that goes on around them (though sometimes an Obliviator or two can help Muggles forget something a little too spectacular), and are generally far less interesting than the magical folk.

But Ms Rowling is pretty even-handed in her treatment of the magical world/non-magical world divide.  Just like non-magical folk have no understanding of Quidditch, dementors, Patronus charms and the rest, the magical world is equally confused about the non-magical world:

Arthur Weasley: What exactly is a rubber duckie for?

The definition of a Muggle depends on where you stand!

Now what does this have to do with operations research (other than being the theme of this month’s INFORMS Blog Challenge, to which this article is my entry)?  A wonderful aspect of working in operations research, particularly on the practical side of the field, is that you both work with Muggles and get to be a Muggle.

Working with Muggles is pretty obvious.  We in operations research have these magical powers to do incredible feats.  Have a problem with millions of decisions to make?  Easy!  Well, easy as long as we can assume linear objective and constraints, and that the data is known and fixed, and …. (magic even in Harry Potter’s world has limitations).  But for the Muggles to believe our results, we do have to spend time explaining what we do and the assumptions we work with.  So we trot out some simple examples, like the traveling salesman problem (“Consider a traveling salesman who has to visit the cities on this napkin”:  don’t laugh!  That is the first real conversation I had with the woman who eventually decided to marry me!).  And we explain why the problem is hard (“Consider how many tours there are”).  And sometimes we convince the whole world of the difficulty so well that they don’t listen to the next part:  “Despite the difficulty, we can really solve these models”.  There are whole swathes of the world, including, it seems, much of computer science, that believe that 30-city traveling salesman instances are a true challenge, requiring an immediate application of approximation algorithms or heuristic methods.  But then we solve interesting problems, and the Muggles begin to believe.  And that is very satisfying.
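To make the “how many tours” point concrete, here is a small sketch of my own (not from the original post): count the distinct tours for a few instance sizes, and brute-force a tiny random instance to show that the count alone is not what makes the problem hard in practice. Modern exact codes, of course, do far better than enumeration.

```python
# A minimal sketch: tour counts explode, yet small instances are still easy.
import itertools
import math
import random

def tour_count(n):
    # Distinct tours on n cities: fix the starting city, halve for direction.
    return math.factorial(n - 1) // 2

for n in (10, 20, 30):
    print(f"{n} cities: about {tour_count(n):.2e} distinct tours")

# Brute force the optimal tour for a tiny random Euclidean instance.
random.seed(0)
n = 9
pts = [(random.random(), random.random()) for _ in range(n)]

def tour_length(order):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % n]]) for i in range(n))

# Fix city 0 as the start and enumerate the rest: 8! = 40,320 tours.
best = min(tour_length((0,) + p) for p in itertools.permutations(range(1, n)))
print(f"optimal tour length for {n} random cities: {best:.3f}")
```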

But it gets even better when we in operations research get to be the Muggles.  And this happens a lot on the practical side of operations research because we get to work with a lot of very smart and very interesting people outside of our field.  A few years ago, I worked with the United States Postal Service to redesign its processing and distribution network.  I know a lot about optimization and models and algorithms.  About all I knew about postal delivery was that a very friendly guy comes by practically every day around 11, and he prefers if I park my car back about two feet so he can cut through the driveway easier.  Turns out there is a heck of a lot more to the Postal Service than Postman Pete and his walk through our neighborhood.  So I got to be the Muggle and to learn about the issues in getting mail from one side of the country to the other in a reasonably timely fashion.  There is “First Class Mail” and “Third Class Mail”, but no “Second Class Mail”.  Why?  Well, that’s quite a story, let me tell you!  And after a few months, I felt that I had passed my first class in the Magic of Mail, but was nowhere near losing my Muggle designation.  But I knew enough to create a few models, and I could explain them to the Wizards of the Mail, and they could correct my Muggle understanding of mail processing (“No, no, no:  a flat could never be handled in that way:  that is only for Third Class, silly!”).  And eventually we arrived at models that did a pretty good job of representing the mail system.  I was a bit less of a Muggle about mail, and they were a bit less Muggley about operations research.

Over the years, I have started as a Muggle about cell-phone production, sports scheduling, voting systems, and a number of other areas.  And I got to read about these areas, talk to smart people about the issues, and, eventually, become, if not a Wizard, then at least a competent student of each.

Some fields are, by their nature, inward looking.  The best operations research is not, and that is a true pleasure of the field.

The Appeal of Operations Research and Sports

For a more recent comment on MLB scheduling and the Stephensons see my response to the 30 for 30 video.

The relationship between operations research and sports is one topic that I return to often on this site.  This is not surprising:  I am co-owner of a small sports scheduling company that provides schedules to Major League Baseball and their umpires, to many college-level conferences, and even to my local kids’ soccer league.  Sports has also been a big part of my research career.  Checking my vita, I see that about 30% of my journal or book chapter papers are on sports or games, and almost 50% of my competitive conference publications are in those fields.  Twenty years ago, my advisor, Don Ratliff, when looking over my somewhat eclectic vita at the time (everything from polymatroidal flows to voting systems to optimization implementation), told me that while it was great to work in lots of fields, it was important to be known for something.  To the extent that I am known for something at this point, it is either for online stuff like this blog and or-exchange, or for my part in the great increase of operations research in sports, and sports scheduling in particular.

This started, as most things in life often do, by accident.  I was talking to one of my MBA students after class (I was younger then, and childless, so I generally took my class out for drinks a couple times a semester after class) and it turned out he worked for the Pittsburgh Pirates (the local baseball team).  We started discussing how the baseball schedule was created, and I mentioned that I thought the operations research techniques I was teaching (like integer programming) might be useful in creating the schedule.  Next thing I know, I get a call from Doug Bureman, who had recently worked for the Pirates and was embarking on a consulting career.  Doug knew a lot about what Major League Baseball might look for in a schedule, and thought we would make a good team in putting together a schedule.  That was in 1996.  It took until 2005 for MLB to accept one of our schedules for play.  Why the wait?  It turned out that the incumbent schedulers, Henry and Holly Stephenson, were very good at what they did.  And, at the time, the understanding of how to create good schedules didn’t go much beyond work on minimizing breaks (consecutive home games or away games) in schedules, work done by de Werra and a few others.  Over the decade from 1996-2005, we learned things about what does work and what doesn’t work in sports scheduling, so we got better on the algorithmic side.  But even more important was the vast increase in speed in solving linear and integer programs.  Between improvements in codes like CPLEX and increases in the speed of computers, my models were solving millions of times faster in 2005 than they did in 1996.  So finally we were able to create very good schedules quickly and predictably.

In those intervening years, I didn’t spend all of my time on Major League Baseball of course.  I hooked up with George Nemhauser, and we scheduled Atlantic Coast Conference basketball for years.  George and I co-advised a great doctoral student, Kelly Easton, who worked with us after graduation and began doing more and more scheduling, particularly after we combined the baseball activities (with Doug) and the college stuff (with George).

After fifteen years, I still find the area of sports scheduling fascinating.  Patricia Randall, in a recent blog post (part of the INFORMS Monthly Blog Challenge, as is this post), addressed the question of why sports is such a popular area of application.  She points to the way many of us know at least something about sports:

I think the answer lies in the accessibility of the data and results of a sports application of OR. Often only a handful of people know enough about an OR problem to be able to fully understand the problem’s data and judge the quality of potential solutions. For instance, in an airline’s crew scheduling problem, few people may be able to look at a sequence of flights and immediately realize the sequence won’t work because it exceeds the crew’s available duty hours or the plane’s fuel capacity. The group of people who do have this expertise are probably heavily involved in the airline industry. It’s unlikely that an outsider could come in and immediately understand the intricacies  of the problem and its solution.

But many people, of all ages and occupations, are sports fans. They are familiar with the rules of various sports, the teams that comprise a professional league, and the major players or superstars. This working knowledge of sports makes it easier to understand the data that would go into an optimization model as well as analyze the solutions it produces.

I agree that this is a big reason for popularity. When I give a sports scheduling talk, I know I can simply put up the schedule of the local team, and the audience will be immediately engaged and interested in how it was put together. In fact, the hard part is to get people to stop talking about the schedule so I can get on with talking about Benders’ approaches or large-scale local search or whatever the real content of my talk is.

But let me add to Patricia’s comments: there are lots of reasons why sports is so popular in OR (or at least for me).

First, we shouldn’t ignore the fact that sports is big business. Forbes puts the value of the teams of Major League Baseball at over $15 billion, with the Yankees alone worth $1.7 billion. With values like that, it is not surprising that there is interest in using data to make better decisions. Lots of sports leagues around the world also have a large economic impact, making sports a significant part of the overall economy.

Second, there are a tremendous number of issues in sports, making it applicable and of interest to a wide variety of researchers. I do essentially all my work in scheduling, but there are lots of other areas of research. If you check out the MIT Sports Analytics conference, you can see the range of topics covered. By covering statistics, optimization, marketing, economics, competition and lots of other areas, sports can attract interest from a variety of perspectives, making it richer and more interesting.

A third reason that sports has a strong appeal, at least in my subarea of scheduling, is the close match between what can be solved and what needs to be solved. For some problems, we can solve far larger problems than would routinely occur in practice. An example of this might be the Traveling Salesman Problem. Are there real instances of the TSP that people want to solve to optimality that cannot be solved by Concorde? We have advanced so far in solving the problem that the vast majority of practical applications are now handled. Conversely, there are problems where our ability to solve problems is dwarfed by the size of problem that occurs in practice. We would like to understand, say, optimal poker play for Texas Hold’em (a game where each player works with seven cards, five of them in common with other players). Current research is on Rhode Island hold’em, where there are three cards and strong limitations on betting strategy. We are a long way from optimal poker play.

Sports scheduling is right in the middle. A decade ago, my coauthors and I created a problem called the Traveling Tournament Problem. This problem abstracts out the key issues of baseball scheduling but provides instances of any size. The current state of the art can solve the 10-team instances to optimality, but cannot solve the 12-team instances. There are lots of sports scheduling problems where 10-20 teams is challenging. Many real sports leagues are, of course, also in the 10-20 team range. This confluence of theoretical challenge and practical interest clearly adds to the research enthusiasm in the area.
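For readers who have not seen the problem, here is a toy sketch of my own (the real benchmark instances live elsewhere): a 4-team double round robin, encoded round by round, and the travel objective the Traveling Tournament Problem minimizes. The distance matrix and the schedule are made up; the actual TTP also forbids more than three consecutive home or away games and back-to-back meetings of the same pair, which this particular schedule happens to satisfy.

```python
# A toy 4-team instance, purely illustrative.
# schedule[r][t-1] = opponent of team t in round r; positive means t is at home,
# negative means t plays away at the opponent's venue.

D = [  # hypothetical symmetric distances; row/column 0 unused so teams are 1..4
    [0, 0, 0, 0, 0],
    [0, 0, 10, 15, 34],
    [0, 10, 0, 22, 27],
    [0, 15, 22, 0, 19],
    [0, 34, 27, 19, 0],
]

schedule = [  # a feasible double round robin over 6 rounds
    [ 2, -1,  4, -3],
    [ 3,  4, -1, -2],
    [-4, -3,  2,  1],
    [-2,  1, -4,  3],
    [-3, -4,  1,  2],
    [ 4,  3, -2, -1],
]

def total_travel(schedule, D, n_teams):
    total = 0
    for t in range(1, n_teams + 1):
        loc = t                            # every team starts the season at home
        for rnd in schedule:
            opp = rnd[t - 1]
            nxt = t if opp > 0 else -opp   # home game: stay home; away game: travel
            total += D[loc][nxt]
            loc = nxt
        total += D[loc][t]                 # and returns home after the last round
    return total

print("total travel for this schedule:", total_travel(schedule, D, 4))
```

Finding the schedule that minimizes this total, subject to the consecutive-game and repeater rules, is the whole game, and it is the search over schedules like this one that stalls somewhere around 12 teams.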

Finally, there is an immediacy and directness of sports scheduling that makes it personally rewarding. In much of what I do, waiting is a big aspect: I need to wait a year or two for a paper to be accepted, or for a research agenda to come to fruition. It is gratifying to see people play sports, whether it is my son in his kids’ soccer game, or Derek Jeter in Yankee Stadium, and know not only that they are there because programs on my computer told them to be, but also that the time from scheduling to play is measured in months or weeks.

This entry is part of the March INFORMS Blog Challenge on Operations Research and Sports.

Finding Love Optimally

Like many in operations research, my research interests often creep over into my everyday life. Since I work on scheduling issues, I get particularly concerned with everyday scheduling, to the consternation of my friends and family (“We should have left six minutes ago: transportation is now on the critical path!”). This was particularly true when I was a doctoral student when, by academic design, I was living and breathing operations research 24 hours a day.

I was a doctoral student from ages 22 to 27 (age will be relevant shortly), and like many in that age group, I was quite concerned with finding a partner with whom to spend the rest of my life. Having decided on certain parameters for such a partner (female, breathing, etc.), I began to think about how I should optimally find a wife. In one of my classes, it hit me that the problem has been studied: it is the Secretary Problem! I had a position to fill (secretary, wife, what’s the difference?), a series of applicants, and my goal was to pick the best applicant for the position.

Fortunately, there is quite a literature on the Secretary Problem (for a very nice summary of results, see this site, from which I also got the background to the graphic for this entry), and there are a number of surprising results. The most surprising is that it is possible to find the best secretary with any reasonable probability at all. The hard part is that each candidate is considered one at a time, and an immediate decision must be made to accept or reject the candidate. You can’t go back and say “You know, I think you are the cat’s meow after all”. This matched up with my empirical experience in dating. Further, at each step, you only know if the current candidate is the best of the ones you have seen: candidates do not come either with objective values or with certifications of perfection, again matching empirical observations. You can only compare them with what you have sampled.

Despite these handicaps, if you know how many candidates there are, there is a simple rule to maximize the chance of finding the best mate: sample the first K candidates without selecting any of them, and then take the first subsequent candidate who is the best of all you have seen. K depends on N, the total number of candidates you will see. As N gets big, K moves toward 1/e times N, where e is 2.71…. So sample 36.9% of the candidates, then take the first candidate who is the best you have seen. This gives a 36.9% chance of ending up with Ms (in my case) Right.
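If you want to convince yourself of the 1/e rule without working through the derivation, a quick simulation does it. This is a sketch of my own, not anything from the original post: generate random arrival orders, skip the first N/e candidates, take the first one better than everything seen so far, and count how often that turns out to be the overall best.

```python
# Monte Carlo check of the classical secretary rule.
import math
import random

def secretary_success_rate(n, trials=100_000, seed=0):
    rng = random.Random(seed)
    k = round(n / math.e)              # length of the look-only phase
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))         # rank 0 is the best candidate
        rng.shuffle(ranks)             # candidates arrive in random order
        best_sampled = min(ranks[:k]) if k > 0 else n
        chosen = None
        for r in ranks[k:]:
            if r < best_sampled:       # first candidate better than all sampled
                chosen = r
                break
        wins += chosen == 0            # success only if we got the overall best
    return wins / trials

print(secretary_success_rate(100))     # prints roughly 0.37, i.e. about 1/e
```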

One problem here: I didn’t know what N is. How many eligible women will I meet? Fortunately, the next class covered that topic. If you don’t know what N is but know that you will be doing this over a finite amount of time T, then it is OK to replace this with a time cutoff rule: simply take the first candidate after 36.9% of the time (technically, you use 36.9% of the cumulative distribution, but I assumed a uniform distribution of candidate arrivals). OK, I figured, people are generally useless at 40 (so I thought then: the 50-year-old me would like to argue with that assumption), and start this matching process at about 18 (some seem to start earlier, but they may be playing a different game), so, taking 36.9% of the 22-year gap gives an age of 26.11. That was my age! By a great coincidence, operations research had taught me what to do at exactly the time I needed to do that.

Redoubling my efforts, I proceeded to sample the candidate pool (recognizing the odds were against me: there was still only a 36.9% chance of finding Ms Right) when, lo and behold, I met Her: the woman who was better than every previous candidate. I didn’t know if she was Perfect (the assumptions of the model don’t allow me to determine that), but there was no doubt that she met the qualifications for this step of the algorithm. So I proposed.

And she turned me down.

And that is when I realized why it is called the Secretary Problem, and not the Fiancee Problem (though Merrill Flood proposed the problem under that name). Secretaries have applied for a job and, presumably, will take the job if offered. Potential mates, on the other hand, are also trying to determine their best match through their own Secretary Problem. In order for Ms Right to choose me, I had to be Mr. Right to her! And then things get much more complicated. What if I was meeting women in their sampling phase? It did seem that some people were very enthusiastic about having long sampling phases, and none of them would be accepting me, no matter how good a match they would be for me. And even the cutoff of 36.9% looks wrong in this case. In order to have a hope of matching up at all in this “Dual Secretary Problem”, it looked like I should have had a much earlier cutoff, and in fact, it seemed unlikely there was a good rule at all!

I was chagrined that operations research did not help me solve my matching problem. I had made one of the big mistakes of practical operations research: I did not carefully examine the assumptions of my model to determine applicability.

Downcast, I graduated with my doctorate, resolving to marry myself to integer programming. I embarked on a postdoc to Germany.

There, I walked into a bar, fell in love with a beautiful woman, moved in together three weeks later, invited her to live in the United States “for a while”, married her six years after that, and had a beautiful son with her six years ago. I am not sure what optimization model led me down that path, but I think I am very happy with the result.

Some details of this story have been changed to protect me from even more embarrassment. This post is part of the February INFORMS Blog Challenge.

How Operations Research Helps Me Understand Politics and Voting

Over the years, operations research, politics, and voting have intersected often for me. Going back almost 25 years now, I have done research on voting systems. I have blogged on elections, and written about predicting results and presenting results. I have written about political leaders who were trained in operations research, and even countries run on O.R. principles.

Over time, I have been both elated and disillusioned by politics at both the national and local scale. I use what I know about elections when I run committees, and get very frustrated by others running committees without an understanding of the basics of voting theory.

While I will not claim to be an expert on how theory and practice interact in politics and elections, I do have some insights. Many of these are well known to those in voting theory, but some are a little idiosyncratic. Perhaps we can have an election afterward on which is the most useful.

  1. When there are more than two possibilities, Plurality voting is just plain wrong. Plurality voting is what many of us (particularly here in the U.S.) think of as “normal” voting: everyone votes for their favorite and whichever alternative gets the most votes wins. We use this system a lot, and we should essentially never use it. The difficulties it causes with vote-splitting and manipulation are ridiculous. It is plurality voting that causes Republicans to support a left-wing candidate in the U.S. Presidential election (and vice versa for Democrats and a right-wing third party): if the third candidate takes votes away from the Democrat then the Republican has a better chance of winning. I feel strongly enough about this that I cannot be on a committee that uses Plurality voting: I will simply argue about the voting system until it changes (or until they start “forgetting” to tell me about meetings, which is kinda a win-win).
  2. I can live with practically any other voting system. There are quasi-religious arguments about voting systems, with zealots claiming one is far superior to all the others. Nothing has convinced me: the cases where these voting systems differ are exactly the cases where it is unclear to me who should be the winner. So whether it is approval voting (like INFORMS uses), or some sort of point system (“Everyone gets six votes to divide among the candidates”), or multi-round systems (“Divide three votes, then we’ll drop the low vote getter and revote”) or whatever, most of it works for me.
  3. The person setting the agenda can have a huge amount of power. While I may be happy with lots of systems, I am hyper-aware of attempts to manipulate the process. Once you give people the chance to think about things, then the agenda-setter (or voting-rule setter) can have an undue amount of power. Of course, if I am the agenda-setter, then knowing lots of voting rules can be quite helpful. Even without knowing committee preferences, it is handy to know that the following rule can help A win in an election with A, B, and C.

    Let’s do B against C then the winner against A.

    That is pretty obviously to A’s advantage (with voters voting their true preferences, A will win unless one of the others is a Condorcet winner — a candidate who could beat every other candidate in a two-person election). Less obvious is

    Let’s do A against B, then A against C, then the winners against each other

    This seems to favor A, but it does not. The only way A can win this election (with truthful voters) is for A to beat both B and C (hence be the Condorcet winner). (Both of these agendas are simulated in a short sketch after this list.)

    In fact, my research shows that you can arrange for A to win in a four-candidate election no matter what the preferences are, provided A is not the Condorcet loser (loses to every other candidate in a pairwise election) and no other candidate is a Condorcet winner. Unfortunately, no committee is willing to sit still for the 12 votes required, beginning

    We’ll go with A against B, then A against C, then take the winners against each other, and take the winner of that against D, then…

    This leads to my favorite voting tree, as in the diagram.

  4. When there is block voting, power is only weakly related to the size of the block. I have in mind systems where different “voters” have different numbers of votes. So, in a parliamentary system with party-line voting, if there are 40 representatives for party A, then party A gets 40 votes. It might seem that if the overall support is 40% for party A, 40% for party B, and 20% for party C, then it would only be fair to give parliamentary seats in that proportion. Unfortunately, if bills need 50% of the vote to pass, “proportional representation” gives undue power to party C. In fact, in this case C has as much power as A or B: two of the three parties are needed to pass any bill. Conversely, if the support is 51%, 48%, 1%, and a 50% rule is used to pass, then the first party has all the power. (A small power-index sketch after this list makes the same point in code.)

    This simple observation has helped me understand the various issues with the recent U.S. Senate vis-a-vis the filibuster rules (which essentially required 60% of the votes to move anything of substance forward): the Senate vacillated between the Democrats having all the power (51 votes to pass a bill) and the Democrats and Republicans having the same power (60 votes to end a filibuster). With neither arrangement representing reality (either 58% of the Senate seats for the Democrats or perhaps a lower number representing nation-wide party support), the system cannot equate power with support.

    This is seen even more starkly in the election of a single individual like the U.S. President. George Bush in 2004 claimed a “mandate” after winning 51% of the popular vote. While 51% might not seem like a mandate, it is difficult to see how else to map 51% to a single person.

    Understanding this power relationship makes U.S. Electoral College analysis endlessly fascinating, without adding much insight into whether the Electoral College is a good idea or not.

  5. The push towards and away from the median voter explains a lot about party politics. One fundamental model in economics is the Hotelling Model.  Traditionally this model is explained in terms of ice cream vendors along a stretch of beach.  If there is one vendor, he can set up anywhere on the beach:  he has a monopoly, so no matter where the beach-goers are, they will go to the vendor.  But suppose there is more than one vendor and beach-goers go to the closest vendor. If there are two vendors, the only stable place for them to be (assuming some continuity in the placement of beach-goers) is to have both at the median point, right next to each other!  This seems counter-intuitive:  why aren’t they, say, 1/3 and 2/3 along the beach (for the case of uniformly distributed beach-goers)?  In that case, each vendor gets 1/2 of the customers, but the vendor at 1/3 would say “If I move to 1/2 then I’ll get 7/12 of the customers, which is more than my current 1/2”.  Of course, the vendor at 2/3 also has an incentive to move to the middle.  So they will happily set up next to each other, to the detriment of the beach-goers who must travel farther on average to satisfy their needs.

    How is this related to politics? I believe it gives the fundamental pressures on parties in two-party systems. In the U.S., both the Democrats and Republicans are pressed towards the middle in their efforts to get to the median voter. But the most interesting aspects are the ways in which the political system does not meet the modeling assumptions of the Hotelling model. Here are a couple:

    • The Hotelling Model assumes customers will purchase no matter what distance they need to travel to the server. In a political model, voters sufficiently far away from all candidates may simply choose not to participate. While non-participation is often seen as abdicating a role, that need not be the case. Take the case of the “Tea Party Movement”. There are many interpretations of their role in politics, but one threat to the Republicans is a willingness of the Tea Partiers to simply not participate. This has the effect, in a simplistic left-right spectrum model, of moving the median voter to the left. If the Republicans want to move to the resulting median, they would have to hop over the Democrats, something that is simply infeasible (it would take generations to convince the left wing that the Republicans are really their party). So the threat of non-participation is a strong one, and can only be counteracted by the Republicans by having policies sufficiently appealing to the Tea Partiers to keep them participating. Of course, this rightward movement opens the opportunity for the Democrats to appeal to the crowd in the resulting gap between Democrats and Republicans, though the Democrats undoubtedly face non-participation threats at their own extremes.
    • Another sign of the pressures towards and away from the median occurs in the primary/general election form of U.S. politics. During the primaries, a candidate (either local or national) needs to appeal to voters in their party (in most cases). This leads to movement towards the median of a party, particularly if there are only two candidates. Once the candidate has been chosen by the party, though, the candidate is now facing median pressure from the general population. This should result in a movement towards the center, which certainly seems to be the case. Party activists try to stop this move towards the center by forcing pledges or other commitments on candidates, which keep them more towards the median of their own party, perhaps at the expense of general election success.

    The Hotelling Model in politics is a wonderful model: it is wrong but useful. By understanding how the model doesn’t work, we can get insight into how politics does work.
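Since the two agendas in point 3 above are easy to mis-remember, here is a small sketch of my own (not from the original post) that runs both of them on a made-up three-voter profile forming a Condorcet cycle, with everyone voting truthfully. The first agenda hands the election to A; the second does not, because A is not a Condorcet winner.

```python
# Illustrative agenda manipulation on a hypothetical three-voter profile.
profile = [            # each voter ranks the candidates best-to-worst
    ["A", "B", "C"],   # together these form the classic cycle A > B > C > A
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def pairwise_winner(x, y, profile):
    # x beats y if a majority of (truthful) voters rank x above y.
    x_votes = sum(1 for ranking in profile if ranking.index(x) < ranking.index(y))
    return x if x_votes * 2 > len(profile) else y

# Agenda 1: "B against C, then the winner against A" -- A survives the cycle.
first = pairwise_winner("B", "C", profile)
print("agenda 1 winner:", pairwise_winner(first, "A", profile))   # A

# Agenda 2: "A against B, then A against C, then the winners against each other."
w1 = pairwise_winner("A", "B", profile)
w2 = pairwise_winner("A", "C", profile)
print("agenda 2 winner:", pairwise_winner(w1, w2, profile))       # not A
```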
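And for point 4, a crude power count makes the gap between seat share and power explicit. This is again my own illustration, a Banzhaf-style index under an assumed strict-majority quota of 51 out of 100 votes: with a 40/40/20 split all three parties are equally powerful, while with 51/48/1 the largest party has all the power.

```python
# A rough Banzhaf-style power index for weighted (block) voting.
from itertools import combinations

def power(weights, quota):
    # For each party, count coalitions in which it is critical: the coalition
    # meets the quota with the party but falls below it without the party.
    parties = list(weights)
    swings = {p: 0 for p in parties}
    for r in range(1, len(parties) + 1):
        for coalition in combinations(parties, r):
            total = sum(weights[p] for p in coalition)
            if total >= quota:
                for p in coalition:
                    if total - weights[p] < quota:
                        swings[p] += 1
    norm = sum(swings.values())
    return {p: swings[p] / norm for p in parties}

# 40/40/20 seats with a strict-majority rule: all three parties equally powerful.
print(power({"A": 40, "B": 40, "C": 20}, quota=51))
# 51/48/1: the largest party holds all the power.
print(power({"A": 51, "B": 48, "C": 1}, quota=51))
```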

It would be easy to be disillusioned about voting and politics based on theory (and practice, some days). No voting system is fair or nonmanipulable; pressures on candidates force them to espouse views that are not their own; consistency is obviously a foible of a weak mind.

Instead, my better understanding of voting and elections through operations research leaves me energized about voting. However imperfect it is, the system does not need to be mysterious. And a better understanding can lead to better systems.

This topic has been on my list of “to-do”s for a while. I am glad that the Second INFORMS Blog Challenge has gotten me to finally write it!