## Owning the Podium: Summer 2012 edition

During the last winter Olympics, I had what I thought was a pretty good idea.  There are many ways to rank countries during the Olympics:  you can rank them by total number of medals, by number of gold medals, or by some point scheme (say, 5 for gold, 3 for silver, 1 for bronze).  Point schemes seem to make sense, but then people argue about points.  Is a gold worth 5 bronzes or 4? Are two silvers worth more than, less than, or the same as a gold?

So my idea was to rank countries by the fraction of reasonable weights that result in them having the highest point count.  Not every point scheme is reasonable:  only a bronze lover (pyropusaphile?) would score bronze higher than gold.  So we need gold >= silver >= bronze.  And it seems unreasonable to have a negative weight on a medal.  Finally, the weights can be scaled so that the total weight is one.

In the Winter 2010 Olympics, Canada was narrowly edged out by the United States in the Trick Medal Championship (TMC).  Canada had 14 gold, 7 silver, and 5 bronze;  the US went 9, 15, 13.  If you put enough weight on gold, then Canada wins.  But only 45.25% of the reasonable weights put enough weight on gold for Canada to win;  the US wins for the remaining 54.75% of the weights.
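Out of curiosity, the fraction can be estimated by Monte Carlo. This sketch is my own (not from the original post): it samples weights uniformly from the simplex and sorts them to enforce gold ≥ silver ≥ bronze. The estimate depends on exactly what measure you put on "reasonable weights", so it lands near, though not necessarily exactly on, the figures quoted above.

```python
import random

def win_fraction(a, b, trials=200_000):
    """Estimate the fraction of reasonable weightings (g >= s >= b >= 0,
    g + s + b = 1) under which medal count `a` beats medal count `b`."""
    wins = 0
    for _ in range(trials):
        # Uniform point on the simplex: gaps between two sorted uniforms.
        u, v = sorted((random.random(), random.random()))
        g, s, br = sorted((u, v - u, 1 - v), reverse=True)  # g >= s >= br
        if g * a[0] + s * a[1] + br * a[2] > g * b[0] + s * b[1] + br * b[2]:
            wins += 1
    return wins / trials

canada = (14, 7, 5)   # gold, silver, bronze in Vancouver 2010
usa = (9, 15, 13)
print(win_fraction(canada, usa))  # roughly 0.44 under this sampling scheme
```

Under this particular measure, Canada wins exactly when the gold weight exceeds 8/13, which works out to a little under 45% of the region.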

The summer Olympics are now over, with the final medal count being:

US: 46 gold, 29 silver, 29 bronze

China: 38 gold, 27 silver, 22 bronze

Russia: 24 gold, 25 silver, 33 bronze

Great Britain: 29 gold, 17 silver, 19 bronze

with no other country winning at least 20 medals of a single type.

So the coveted TMC Award goes to … the United States in a rout!  In fact, the US wins for every reasonable weighting.  Russia could win with a lot of weight on bronze medals, but not if the weight on gold and silver is at least that of bronze.

A necessary and sufficient condition to win for any reasonable weight is to have

1. more gold than anyone else,
2. more gold+silver than anyone else, and
3. more gold+silver+bronze than anyone else.

Equality in any of these can lead to weights where the country ties for the win.
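The condition is easy to check mechanically, because any reasonable score is a nonnegative combination of the three cumulative counts: g·G + s·S + b·B = (g−s)·G + (s−b)·(G+S) + b·(G+S+B). A small sketch of my own using the 2012 counts above:

```python
def beats_everywhere(a, others):
    """Check the necessary and sufficient condition: strictly more gold,
    gold+silver, and gold+silver+bronze than every other country."""
    def prefixes(m):  # (G, G+S, G+S+B)
        return (m[0], m[0] + m[1], m[0] + m[1] + m[2])
    pa = prefixes(a)
    return all(x > y for o in others for x, y in zip(pa, prefixes(o)))

medals = {
    "US": (46, 29, 29), "China": (38, 27, 22),
    "Russia": (24, 25, 33), "Great Britain": (29, 17, 19),
}
us = medals["US"]
rest = [m for c, m in medals.items() if c != "US"]
print(beats_everywhere(us, rest))  # True: the US wins every reasonable weighting
```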

Here, the US meets that condition.  Of course, it helps that there are a zillion medals in swimming (where the US does well) and only one in, say, team handball (described here as water polo without the water, which is only marginally informative).  But a win is a win:  if any representative of the US Olympic teams would like to drop by my office, I will be glad to give them the TMC trophy (which I will be recycling from my stash of high-school curling trophies I have dragged around the world).

P.S. The Wall Street Journal Numbers Guy has a similar discussion though, sadly, it does not include the above approach.

## How Operations Research Helps Me Understand Politics and Voting

Over the years, operations research, politics, and voting have intersected often for me. Going back almost 25 years now, I have done research on voting systems. I have blogged on elections, and written about predicting results and presenting results. I have written about political leaders who were trained in operations research, and even countries run on O.R. principles.

Over time, I have been both elated and disillusioned by politics at both the national and local scale. I use what I know about elections when I run committees, and get very frustrated by others running committees without an understanding of the basics of voting theory.

While I will not claim to be an expert on how theory and practice interact in politics and elections, I do have some insights. Many of these are well known to those in voting theory, but some are a little idiosyncratic. Perhaps we can have an election afterward on which is the most useful.

1. When there are more than two possibilities, plurality voting is just plain wrong. Plurality voting is what many of us (particularly here in the U.S.) think of as “normal” voting: everyone votes for their favorite, and whichever alternative gets the most votes wins. We use this system a lot, and we should essentially never use it. The difficulties it causes with vote-splitting and manipulation are ridiculous. It is plurality voting that causes Republicans to support a left-wing third-party candidate in the U.S. Presidential election (and vice versa for Democrats and a right-wing third party): if the third candidate takes votes away from the Democrat, then the Republican has a better chance of winning. I feel strongly enough about this that I cannot be on a committee that uses plurality voting: I will simply argue about the voting system until it changes (or until they start “forgetting” to tell me about meetings, which is kinda a win-win).
2. I can live with practically any other voting system. There are quasi-religious arguments about voting systems with zealots claiming one is far superior to all the others. Nothing has convinced me: the cases where these voting systems differ are exactly the cases where it is unclear to me who should be the winner. So whether it is approval voting (like INFORMS uses), or some sort of point system (“Everyone gets six votes to divide among the candidates”), or multi-round systems (“Divide three votes, then we’ll drop the low vote getter and revote”) or whatever, most of it works for me.
3. The person setting the agenda can have a huge amount of power. While I may be happy with lots of systems, I am hyper-aware of attempts to manipulate the process. Once you give people the chance to think about things, then the agenda-setter (or voting-rule setter) can have an undue amount of power. Of course, if I am the agenda-setter, then knowing lots of voting rules can be quite helpful. Even without knowing committee preferences, it is handy to know that the following rule can help A win in an election with A, B, and C.

Let’s do B against C then the winner against A.

That is pretty obviously to A’s advantage (with voters voting their true preferences, A will win unless one of the others is a Condorcet winner — a candidate who could beat every other candidate in a two-person election). Less obvious is

Let’s do A against B, then A against C, then the winners against each other

This seems to favor A, but it does not. The only way A can win this election (with truthful voters) is for A to beat both B and C (hence be the Condorcet winner).
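Both agendas can be checked on a concrete cyclic profile (a toy example of my own): three voters whose preferences form a Condorcet cycle, so there is no Condorcet winner.

```python
def pairwise_winner(x, y, ballots):
    """Majority vote between x and y; ballots are rankings, best first."""
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if 2 * x_votes > len(ballots) else y

# A Condorcet cycle: A beats B, B beats C, C beats A (each 2-1).
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

# Agenda 1: B vs C, winner vs A  ->  A wins.
print(pairwise_winner(pairwise_winner("B", "C", ballots), "A", ballots))  # A

# Agenda 2: A vs B, then A vs C, then the winners meet  ->  A loses to C.
w1 = pairwise_winner("A", "B", ballots)
w2 = pairwise_winner("A", "C", ballots)
print(pairwise_winner(w1, w2, ballots))  # C
```

With no Condorcet winner, the first agenda hands the election to A; the "favorable-looking" second agenda forces A to beat both rivals, which the cycle makes impossible.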

In fact, my research shows that you can arrange for A to win in a four-candidate election no matter what the preferences are provided A is not the Condorcet loser (loses to every other candidate in a pairwise election) and no other candidate is a Condorcet winner. Unfortunately, no committee is willing to sit still for the 12 votes required, beginning

We’ll go with A against B, then A against C, then take the winners against each other, and take the winner of that against D, then…

This leads to my favorite voting tree, as in the diagram.

4. When there is block voting, power is only weakly related to the size of the block. I have in mind systems where different “voters” have different numbers of votes. So, in a parliamentary system with party-line voting, if there are 40 representatives for party A, then party A gets 40 votes. It might seem that if the overall support is 40% for party A, 40% for party B, and 20% for party C, then it would only be fair to give parliamentary seats in that proportion. Unfortunately, if bills need 50% of the vote to pass, “proportional representation” gives undue power to party C. In fact, in this case C has as much power as A or B: two of the three parties are needed to pass any bill. Conversely, if the support is 51%, 48%, 1%, and a 50% rule is used to pass, then the first party has all the power.
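The 40/40/20 arithmetic generalizes to the Banzhaf power index: count how often each party's defection turns a winning coalition into a losing one. A sketch of my own (function name mine):

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf power: for each party, the share of winning
    coalitions that become losing if that party defects."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings]

print(banzhaf([40, 40, 20], 51))  # equal power: any two parties can pass a bill
print(banzhaf([51, 48, 1], 51))   # the first party is a dictator
```

With a majority rule, the 40/40/20 split gives all three parties identical power, while 51/48/1 concentrates all power in the largest party, exactly as described above.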

This simple observation has helped me understand the various issues in the recent U.S. Senate vis-a-vis the filibuster rules (which essentially required 60% of the votes to move anything of substance forward): the Senate vacillated between the Democrats holding all the power (51 votes needed to pass a bill) and the Democrats and Republicans holding equal power (60 votes needed to end a filibuster). With neither outcome representing reality (either 58% of the Senate seats for the Democrats or perhaps a lower number representing nation-wide party support), the system cannot equate power with support.

This is seen even more starkly in the election of a single individual like the U.S. President. George Bush in 2004 claimed a “mandate” after winning 51% of the popular vote. While 51% might not seem like a mandate, it is hard to see how else to map 51% support onto a single person.

Understanding this power relationship makes U.S. Electoral College analysis endlessly fascinating, without adding much insight into whether the Electoral College is a good idea or not.

5. The push towards and away from the median voter explains a lot about party politics. One fundamental model in economics is the Hotelling Model.  Traditionally this model is explained in terms of ice cream vendors along a stretch of beach.  If there is one vendor, he can set up anywhere on the beach:  he has a monopoly, so no matter where the beach-goers are, they will go to the vendor.  But suppose there is more than one vendor, and beach-goers go to the closest vendor. If there are two vendors, the only stable place for them to be (assuming some continuity in the placement of beach-goers) is both at the median point, right next to each other!  This seems counter-intuitive:  why aren’t they, say, 1/3 and 2/3 along the beach (for the case of uniformly distributed beach-goers)?  In that case, each vendor gets 1/2 of the customers, but the vendor at 1/3 would say “If I move to 1/2, the dividing line between us moves to 7/12, so I’ll get 7/12 of the customers, which is more than my current 1/2”.  Of course, the vendor at 2/3 has the same incentive to move to the middle.  So they will happily set up next to each other, to the detriment of the beach-goers, who must travel farther on average to satisfy their needs.
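The vendor arithmetic can be checked with a small best-response simulation of my own (positions restricted to a fine grid): starting from 1/3 and 2/3, repeated best responses walk both vendors to the median.

```python
def share(me, other):
    """Fraction of uniformly distributed beach-goers closer to `me`."""
    if me == other:
        return 0.5
    mid = (me + other) / 2
    return mid if me < other else 1 - mid

print(share(1 / 2, 2 / 3))  # 7/12: the move from 1/3 to 1/2 pays off

# Repeated best responses (each vendor picks the grid point maximizing
# its share, given the other's position) converge to the median.
grid = [i / 1000 for i in range(1001)]
a, b = 1 / 3, 2 / 3
for _ in range(120):
    a = max(grid, key=lambda x: share(x, b))
    b = max(grid, key=lambda x: share(x, a))
print(round(a, 2), round(b, 2))  # 0.5 0.5
```

Each vendor keeps undercutting the other by one grid step, so the pair leapfrogs its way down to the median, where neither can gain by moving.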

How is this related to politics? I believe it gives the fundamental pressures on parties in two-party systems. In the U.S., both the Democrats and Republicans are pressed towards the middle in their efforts to get to the median voter. But the most interesting aspects are the ways in which the political system does not meet the modeling assumptions of the Hotelling model. Here are a couple:

• The Hotelling Model assumes customers will purchase no matter what distance they need to travel to the server. In a political model, voters sufficiently far away from all candidates may simply choose not to participate. While non-participation is often seen as abdicating a role, that need not be the case. Take the case of the “Tea Party Movement”. There are many interpretations of their role in politics, but one threat to the Republicans is a willingness of the Tea Partiers to simply not participate. This has the effect, in a simplistic left-right spectrum model, of moving the median voter to the left. If the Republicans wanted to move to the resulting median, they would have to hop over the Democrats, something that is simply infeasible (it would take generations to convince the left wing that the Republicans are really their party). So the threat of non-participation is a strong one, and can only be counteracted by the Republicans by having policies sufficiently appealing to the Tea Partiers to keep them participating. Of course, this rightward movement opens the opportunity for the Democrats to appeal to the crowd in the resulting gap between Democrats and Republicans, though the Democrats undoubtedly face non-participation threats at their own extremes.
• Another sign of the pressures towards and away from the median occurs in the primary/general election form of U.S. politics. During the primaries, a candidate (either local or national) needs to appeal to voters in their party (in most cases). This leads to movement towards the median of the party, particularly if there are only two candidates. Once the candidate has been chosen by the party, though, the candidate faces median pressure from the general population. This should result in a movement towards the center, which certainly seems to be the case. Party activists try to stop this move towards the center by forcing pledges or other commitments on candidates, which keep them closer to the median of their own party, perhaps at the expense of general election success.

The Hotelling Model in politics is a wonderful model: it is wrong but useful. By understanding how the model doesn’t work, we can get insight into how politics does work.

It would be easy to be disillusioned about voting and politics based on theory (and practice, some days). No voting system is fair or nonmanipulable; pressures on candidates force them to espouse views that are not their own; consistency is obviously a foible of a weak mind.

Instead, my better understanding of voting and elections through operations research leaves me energized about voting. However imperfect it is, the system does not need to be mysterious. And a better understanding can lead to better systems.

This topic has been on my list of “to-do”s for a while. I am glad that the Second INFORMS Blog Challenge has gotten me to finally write it!

## Continuing to be impressed with the 28 year old me

In the wake of coming across my network and matching notes, where the 28 year old me taught the 50 year old me some things I never realized I knew, in a way that leaves (the 50 year old) me breathless (at least about the amount of time the 28 year old me had), I was delighted to see a new survey (published in the Communications of the ACM, no less) about complexity and voting systems that says some very nice things about the 28 year old me.    The article, written by Piotr Faliszewski, Edith Hemaspaandra, and Lane Hemaspaandra looks at elections and the well known issue of manipulation of elections by giving untruthful representations of one’s preferences.  The authors survey approaches that use computational complexity to protect elections:

This article is a nontechnical introduction to a startling approach to protecting elections: using computational complexity as a shield. This approach seeks to make the task of whoever is trying to affect the election computationally prohibitive. To better understand the cases in which such protection cannot be achieved, researchers in this area also spend much of their time working for the Dark Side: trying to build polynomial-time algorithms to attack election systems.

I have a particular fondness for this issue, since the 28 year old me (and his coauthors) wrote about this:

This complexity-based approach to protecting elections was pioneered in a stunning set of papers, about two decades ago, by Bartholdi, Orlin, Tovey, and Trick.  The intellectual fire they lit smoldered for quite a while, but in recent years has burst into open flame.

Wow!  Now that is a paragraph to send off to my Dean!  I have written about how this approach was ignored for a long while, but is now pretty popular.  It is great to see this coverage in an outlet like CACM.

The article goes on to survey this area, and has lots of nice things to say about those 20 year old papers (“seminal” (twice!), “striking insight”).  And it makes a great case for all the work that has been done since, and issues going forward.  Thanks Piotr, Edith, and Lane:  I enjoyed the survey very much.  And my Dad is visiting, and he too loved reading about what my coauthors and I had done.

From the 50 year old me to the 28 year old me:  “That was pretty good work!”  And, if I can speak for the 28 year old me:  “Thanks!  What are you doing now?” 50 year old me: “Hmmm….. can I tell you about sports scheduling? Blogging? Or perhaps I should introduce you to Alexander?”.  28 year old me: “Wasn’t I the one who met Ilona, with whom you have Alexander?”  50 year old me: “OK, we’ll split that one!”

Different phases, different measures.  But I am impressed with the 28 year old me every time I run across him.

## Reading “Numbers Rule” by Szpiro

It is Labor Day weekend here in the US, so, in addition to the mandatory grilling and bicycling with Alexander, I have some time to laze about on the porch reading books (and drinking beer, which is also legally required in my jurisdiction).  I have been reading Numbers Rule by George Szpiro.  This is a fascinating book on the history of thought about voting rules, starting back with Plato and continuing with Pliny the Younger, Llull, Cusanus, Borda, Condorcet, Laplace, Dodgson, and ending with the impossibility results of  Arrow, Gibbard, and Satterthwaite.  Interleaved are a few chapters on the problem of allocating seats in a parliament.

I have done work in this area (and Bartholdi, Tovey and Trick even make an appearance on page 115) but I don’t consider myself a specialist.  Even specialists, however, might learn something on the history of the field from this book.  The Llull-Cusanus period (roughly 1200 A.D. to 1450 A.D.) in particular was new to me.  This pre-Renaissance period was not one that generally generated a lot of deep insights into non-religious issues (in the Western world), but voting was one area of great interest to the ecclesiastics, most notably in the election of the Pope, but also in electing people to lesser positions such as abbot.

Voting seems to be an easy problem:  we do it all the time.  “How many people want A?  How many people want B?  OK, A wins” is certainly used in our three-person household.  But voting is much harder when there are more than two possible outcomes.  With A, B, and C as possibilities, having each elector vote for one and then taking the one with the most votes (plurality elections) leads to all sorts of bad outcomes.  For instance, it is arguable that having Nader in the 2000 U.S. Presidential election with Bush and Gore led to Bush winning, while without Nader, Gore would have won.  This is an example of violation of “Independence of Irrelevant Alternatives”:  shouldn’t an election result be consistent whether or not a third (or fourth or fifth) candidate enters?  In other words, if A beats B when only the two run, then when C also enters, either C should win or A should still win.  Stands to reason! But plurality voting is terrible with respect to this condition, so “splitting the opposition” is a standard way to strategically manipulate such an election.

The book makes it clear that issues of fairness in elections with more than two candidates go back to Greek times.  There have been two main approaches in getting beyond plurality voting.   In both cases, electors rank all of the candidates (unlike in plurality where only the most-preferred candidate is listed).  In the first method, candidates get points based on their placements.  For a 4-candidate election, every “first place” vote is worth 4 points, every second place vote is worth 3, and so on.  The candidate with the most points wins.  In the second approach, the pairwise matchups are analyzed and the person who would win the most pairwise elections is deemed the overall winner.  Ideally, there is a candidate who would win against any other candidate in a pairwise election, and that person is a natural choice for winner (and a choice that plurality is not guaranteed to choose).  Such a candidate is known as a Condorcet winner.
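Both approaches are easy to compute from ranked ballots. Here is a toy profile of my own where plurality buries the Condorcet winner, while the point method and the pairwise method agree on it:

```python
ballots = [("A", "B", "C")] * 2 + [("C", "B", "A")] * 2 + [("B", "A", "C")]
cands = ["A", "B", "C"]

# Point (Borda) scores: with k candidates, a ballot gives k points to its
# first choice, k-1 to its second, and so on.
borda = {c: sum(len(b) - b.index(c) for b in ballots) for c in cands}

def beats(x, y):
    """True if a majority of ballots rank x above y."""
    return sum(1 for b in ballots if b.index(x) < b.index(y)) > len(ballots) / 2

condorcet = [c for c in cands if all(beats(c, o) for o in cands if o != c)]
plurality = {c: sum(1 for b in ballots if b[0] == c) for c in cands}

print(max(borda, key=borda.get))          # B
print(condorcet)                          # ['B']
print(min(plurality, key=plurality.get))  # B has the fewest first-place votes
```

B beats both A and C head-to-head and tops the point count, yet finishes last under plurality: a compromise candidate that "vote for one" simply cannot see.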

I had always credited the foundation of these two approaches to Borda and Condorcet respectively.  Both lived in the 1700s in France, Borda being a mathematician and military officer, Condorcet a “pure” scientist and government official.  But the real credit for these approaches should go to Cusanus and Llull, four hundred years earlier.  Numbers Rule gives a very good description of their work, all of which was new to me.

One aspect of Numbers Rule that I really like is the brief biographical summary at the end of every chapter. Every chapter is based on one (or a few) key figures.  Rather than try to weave the biography of that person in with the description of their findings in voting, only the key features are in the main text, while the biographical summary provides a deft summary of the rest of their lives.  The people came alive through those summaries, but extraneous details did not hinder the main exposition.

The book is non-mathematical, in that the very few equations are exiled to chapter appendices, but it is analytical in the sense that concepts are clearly and completely described.  There is no hand-waving or “This is neat but really too hard for you”.  Even NP-Completeness gets covered in a reasonable manner (in the context of my own work).

It is only in the coverage of my own work that I really disagree with the author.  Briefly, Charles Dodgson (better known as Lewis Carroll of Alice in Wonderland fame) proposed that the winner of an election should be the one who becomes the Condorcet winner with the fewest changes to the electors’ ballots.  Bartholdi, Tovey and I proved that determining the Dodgson winner is NP-Hard.  Szpiro writes that this result was the “death knell” of Dodgson’s Rule, which I think vastly overstates the case.  We solve NP-Hard problems all the time, through integer programming, constraint programming, dynamic programming, and practically any other -programming you like.  There are very, very few practical elections for which we could not determine the winner in a reasonable amount of time (exceptions would be those with a vast number of candidates).  In my mind, the main problem with NP-Hard voting rules is that the losers cannot be succinctly convinced that they really lost.  Without a co-NP characterization, losers have to be content with “The computer program I wrote says you lost”, which is unlikely to be satisfying.  But I don’t think Dodgson’s rule is dead, and I certainly don’t think I killed it!
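NP-hardness only bites asymptotically; for toy elections the Dodgson winner yields to brute force. In this sketch of my own, a "change" is a swap of adjacent candidates on one ballot (the standard formulation of Dodgson's rule), and a breadth-first search over profiles finds the minimum number of swaps. The state space explodes quickly with more voters and candidates, which is exactly where the hardness result lives.

```python
from collections import deque

def is_condorcet(c, profile):
    """True if c beats every other candidate in a pairwise majority vote."""
    return all(
        2 * sum(1 for b in profile if b.index(c) < b.index(o)) > len(profile)
        for o in profile[0] if o != c)

def dodgson_score(c, profile):
    """Minimum number of adjacent swaps on individual ballots needed to
    make c a Condorcet winner (brute-force BFS; tiny elections only)."""
    start = tuple(profile)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        prof, d = queue.popleft()
        if is_condorcet(c, prof):
            return d
        for i, ballot in enumerate(prof):
            for j in range(len(ballot) - 1):
                nb = list(ballot)
                nb[j], nb[j + 1] = nb[j + 1], nb[j]
                nxt = prof[:i] + (tuple(nb),) + prof[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))

# A three-voter Condorcet cycle: a single swap suffices for any candidate.
cycle = (("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B"))
print(dodgson_score("A", cycle))  # 1
```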

Operations research comes out very well in the book.  In addition to accurately describing Bartholdi, Tovey and me as Professors of Operations Research (kinda accurately:  I was a doctoral student when the paper was written but an Assistant Professor at CMU when it was published), OR takes a star turn on page 189 when the work of Michel Balinski is described.  Here is part of the description:

One of Balinski’s areas of expertise was integer programming, a branch of operations research.  Developed before, during and after World War II, operations research originated in the military where logistics, storage, scheduling and optimization were prime considerations.  But it soon acquired enormous importance in many other fields, for example in engineering, economics and business management.  While game theory, developed at the same time, was mainly of theoretical interest, operations research was immediately applied to practical problems.  Whenever something needed to be maximized or minimized – optimized for short – and resources were constrained, operations research offered the tools to do so.

What a lovely paragraph!

If you have any interest in learning about why voting and apportionment are not straightforward, and want a readable, history-oriented book on approaches to these problems, I highly recommend Numbers Rule:  reading it has been a great way to spend a lazy weekend.

## Algorithmic Voting Theory, Venice, and a Talk on Old/New Papers

I just got back from Venice, where I attended a conference on Algorithmic Decision Theory.  This is a new conference series (optimistically numbered the first conference, implying at least a second) revolving around uncertainty in decision making, preference elicitation, learning, and other issues.  From the conference web page:

A new unique event aiming to put together researchers and practitioners coming from different fields such as Decision Theory, Discrete Mathematics, Theoretical Computer Science and Artificial Intelligence in order to improve decision support in the presence of massive data bases, combinatorial structures, partial and/or uncertain information and distributed, possibly inter-operating decision makers. Such problems arise in several real-world decision making problems such as humanitarian logistics, epidemiology, risk assessment and management, e-government, electronic commerce, and the implementation of recommender systems.

This area has been very active, particularly in computer science where there are  a host of applications.

I was asked to give one of the invited talks, and spoke on “An Operations Research Look at Voting”.  I was a little worried about giving this talk, since my main qualifications come from papers published twenty years ago.  When I was a doctoral student at Georgia Tech, I worked with John Bartholdi and Craig Tovey on computational issues in voting.  Being the first to look at those issues, we got to prove some of the basic results in the area.  These include

1. For some natural voting rules, it is NP-hard to determine who the winner is.
2. For some natural voting rules, it is NP-hard to determine how to manipulate the rule (where manipulation means misrepresenting your preferences so as to get a preferred outcome).
3. For some natural voting rules, optimally using the powers of a chair to form subcommittees or otherwise manipulate the voting process is NP-hard.

We published this work in Social Choice and Welfare (after getting it rejected from more mainstream economics journals) where … it was soundly ignored for 15 years.  No one referred to the papers; no one followed up on the papers;  no one cared about the papers at all!

This work was my job talk in 1987/88, and it got me a wonderful job here at the Tepper School (then GSIA), Carnegie Mellon.  And, without Google Scholar, it was not obvious that the papers were being ignored, so they added to my vita, and made it a bit easier to pass through the various steps.

But I did like the work a lot, and regretted (and still regret) that my economist colleagues did not take computational limits more seriously in their models.

But then something amazing happened about 5 years ago:  people started referring to the papers!  The references were mainly in computer science, but at least the papers were being recognized. The counts of these papers in the Web of Science (formerly Science Citation Index) are particularly striking.  In the associated graph, the x-axis is years since publication;  the y-axis is the number of references in Web of Science in that year (Google scholar numbers are higher of course, but there is a bias towards more recent papers there).  In my talk, I compare that graph to my “normal” papers, which reach a peak after 4 or 5 years then decrease.   It is really gratifying to see the interest in these papers along with all the really cool new results people are getting.

I closed off the talk with some work I have done recently on creating voting trees.  Suppose there are three candidates, “a”, “b”, and “c”, and you really like candidate “a”.  Many voting systems proceed as a series of comparisons between two alternatives (think of standard parliamentary procedure).  If you are the chairperson, you might try to bring the candidates forward so as to increase the chances of “a” winning.  In fact, if you set the agenda to be “b” against “c” and the winner against “a”, then “a” will win as long as “a” beats someone (and no one beats everyone).  In this problem, the goal is to do the manipulation without knowing other voters’ preferences.

Can you do this for four candidates?  If you want “a” to win, “a” must be in the top cycle: the smallest group of candidates (possibly the entire set) each of whom beats every candidate outside the group.  A Condorcet winner is a top cycle of size one:  if some candidate beats all the other candidates one-on-one, then that candidate must win any voting tree it occurs in.  So, assuming “a” is in the top cycle, can you create a voting tree so that “a” wins with four candidates?  The answer is yes, but it is a bit complicated:  first “a” goes against “c” with the winner against “d” then the winner against “b” who plays the winner of (“a” goes against “b” with the winner against “d” …) ….  In fact, the minimum tree has 14 leaves!  I am biased, but I think the tree is beautiful, and it goes to show how hard it is to manipulate agendas without knowledge of others’ preferences.  I am in the process of generating the trees on 4 candidates:  there is a very natural rule (“Copeland winner with Copeland loser tie break”:  see the presentation for definitions) that requires more than 32 leaves (if an implementation tree for it exists).

Sanjay Srivastava and I made a conjecture almost 15 years ago that would imply that this sort of manipulation would be possible no matter how many candidates.  Little progress has been made but I think it is still a great problem (the economists know this as implementation by backwards induction and characterizing rules implementable on trees is an important problem in social choice/mechanism design).

If you want more details on all this, here are my slides.  The references are

• M.A. Trick, “Small Binary Voting Trees”, First International Workshop on Computational Social Choice, Amsterdam, Netherlands, 500-511 (2006).
• S. Srivastava and M.A. Trick, “Sophisticated voting rules: the two tournament case”, Social Choice and Welfare, 13: 275-289 (1996).
• J.J. Bartholdi, C.A. Tovey and M.A. Trick, “How hard is it to control an election?”, a slightly revised version of which appeared in Mathl. Comput. Modelling (Special Issue on Formal Theories of Politics), 16(8/9): 27-40 (1992).
• J.J. Bartholdi, C.A. Tovey and M.A. Trick, “The computational difficulty of manipulating an election”, Social Choice and Welfare, 6: 227-241 (1989).
• J.J. Bartholdi, C.A. Tovey and M.A. Trick, “Voting schemes for which it can be difficult to tell who won the election”, Social Choice and Welfare, 6: 157-165 (1989).

Oh, and Venice is a beautiful place to visit.  But you might have heard that elsewhere.

## Michel Balinski IFORS Distinguished Lecture

The IFORS Distinguished Lecturer for the INFORMS meeting was Michel Balinski of Ecole Polytechnique and CNRS, Paris. Michel spoke on “One-Vote, One-Value: The Majority Judgement”, a topic close to my heart. In the talk, Michel began by discussing the pitfalls of standard voting (manipulation, “unfair” winners, and so on). He then spent most of his talk on a method he proposes for generating rankings and winners. For an election on many candidates (or a ranking of many gymnasts, or an evaluation of many wines: the applications are endless), have the electors (judges, etc.) rate each candidate on a scale using terms that are commonly understood. So a candidate for president might be rated on the scale “Excellent, Very Good, Good, Acceptable, Reject”. Then, the evaluation of a candidate is simply the median evaluation of the electors. The use of the median is critical: it limits the amount of manipulation a voter can do. If I like a candidate, there is limited effect if I greatly overstate my liking: it cannot change the overall evaluation unless my evaluation is already at or below that of the median voter.
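The median evaluation is a one-liner to compute. A sketch of my own (the grade scale is from the talk; `majority_grade` is my name for it), taking the lower median when there is an even number of graders:

```python
from statistics import median_low

GRADES = ["Reject", "Acceptable", "Good", "Very Good", "Excellent"]

def majority_grade(evaluations):
    """The candidate's evaluation is the median grade of the electors
    (the lower median when there is an even number of graders)."""
    return GRADES[median_low(GRADES.index(g) for g in evaluations)]

votes = ["Good", "Very Good", "Excellent", "Acceptable", "Very Good"]
print(majority_grade(votes))  # Very Good

# One judge inflating a grade cannot drag the median past the other voters:
inflated = ["Good", "Excellent", "Excellent", "Acceptable", "Very Good"]
print(majority_grade(inflated))  # still Very Good
```

The second call shows the manipulation resistance: raising one "Very Good" to "Excellent" leaves the median, and hence the candidate's evaluation, unchanged.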

Michel then went on and discussed some tiebreaking rules (to handle the case that two or more candidates are, say “Very Good” and none “Excellent”). I found the tie-breaking rules less immediately appealing, but I need to think about these more.

Michel had done an experiment on this by asking INFORMS participants to do an evaluation of possible US Presidential candidates (not just Obama and McCain, but also Clinton, Powell, and a number of others). The result (on a small 129 voter sample) put Obama well ahead, but I do suspect some selection bias at work.

This work will be the basis of a book to be published at the end of the year, and there is a patent pending on the voting system (which I found a little strange: what would it mean to use a patented voting system?).

I didn’t get the URLs at the end of the talk.  If anyone got them, can you email me with them?  A quick web search only confused me more.

Thanks Ashutosh for this pointer.

Added Oct 20. Michel Balinski kindly wrote and provided the following references:

Michel Balinski and Rida Laraki, “Le jugement majoritaire : l’expérience d’Orsay,” Commentaire, no. 118, summer 2007, pp. 413-419.

One-Value, One-Vote: Measuring, Electing, and Ranking (tentative title), to appear 2009.

http://ceco.polytechnique.fr/jugement-majoritaire.html

Michel Balinski and Rida Laraki, “A theory of measuring, electing and ranking,” Proceedings of the National Academy of Sciences USA, May 22, 2007, vol. 104, no. 21, pp. 8720-8725.

Michel Balinski and Rida Laraki, “Election by Majority Judgement: Experimental Evidence,” Cahier du Laboratoire d’Econométrie de l’Ecole Polytechnique, December 2007, no. 2007-28.