
Poker and Operations Research

I just attended a fantastic talk by Michael Bowling of the University of Alberta entitled “AI After Dark: Computers Playing Poker” where he described work by the U of A Computer Poker Research Group. As with most artificial intelligence research (particularly in games), there is no pretense of mimicking the human brain in this work: it is all about algorithms to compute optimal strategies, which puts it solidly within operations research in my view.

Much of his talk revolved around a recent “Man-Machine Poker Championship” where a program they have developed played against two professional poker players: Phil “The Unabomber” Laak and Ali Eslami. Laak is known from TV appearances; I haven’t seen Eslami, but he has a computer science background and understands the work the U of A researchers do, so that might make him an even more formidable opponent. The results were, at one level, inconclusive. The humans, playing “duplicate” poker (in which the same deals are played at two tables with the human and computer cards swapped, so that much of the luck cancels out), won two of the four 500-deal matches, lost one, and tied one. The overall amount of money “lost” by the computer was very small. I have no doubt that most humans facing professionals would have lost a lot more, so having a competitive program is already a big step. Like most who lose at poker, Michael claimed “bad luck,” but he has the technical skills to correct for the luck, and he was pretty convincing that the computer outplayed the humans.

One interesting aspect is that the program does no “opponent analysis” yet, though that is an extremely active research area (75% of the U of A’s efforts, Michael said). Given a couple more years, I am pretty confident that these programs will start giving the pros a run for their money. Michael said that one goal of their work is to make Phil Hellmuth cry. That seems a little less likely.

On the technical side, the presentation concentrated on some new ways to solve huge (10^12-state) extensive-form games. They have a neat method in which programs learn by playing against themselves. It takes a month of serious computation to tune the poker player, but the method may have other applications in economics. Check out Michael’s publications for more information.
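The talk did not spell out the algorithm, but the U of A group’s published self-play methods are built on regret minimization: each player repeatedly adjusts its mixed strategy in proportion to how much it regrets not having played each action, and the average strategy over many iterations approaches an equilibrium. As a minimal sketch (my own toy illustration, not their poker solver), here is regret-matching self-play on rock-paper-scissors, whose average strategies converge toward the uniform Nash equilibrium:

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b] = payoff to the player choosing a against an opponent choosing b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Mix actions in proportion to positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total > 0:
        return [p / total for p in pos]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly

def self_play(iterations=100000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy_from_regrets(regrets[p]) for p in (0, 1)]
        acts = [rng.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            me, opp = acts[p], acts[1 - p]
            for a in range(ACTIONS):
                # regret = what I could have earned with a, minus what I earned
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
                strategy_sum[p][a] += strats[p][a]
    # the *average* strategy is what converges, not the last iterate
    return [[s / iterations for s in strategy_sum[p]] for p in (0, 1)]
```

Poker is vastly harder because each player has many information sets rather than one, but the core loop — play yourself, accumulate regret, average the strategies — is the same idea scaled up.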

Definitely one of the best talks I have heard in a long time!

{ 1 } Comments

  1. Gary Carson | March 12, 2008 at 7:13 pm | Permalink

    Phil Laak is (was?) a mechanical engineer, so both opponents probably have the background to understand what they’re doing at U. of Alberta.

    Before the days of online poker for money there was an internet poker site (IRC) where an early version of the U. of Alberta bot played. A lot of poker players played against it in full games (it does better heads up than in full games), including Chris Ferguson (a WSOP main event winner who has a Ph.D. in Computer Science and who won this year’s NBC heads-up invitational event), and myself (who has an MS in IE/MS and has written a couple of books on poker).

    I have no doubt that the U. of Alberta bot will eventually be a world-champion heads-up player. Heads-up opponent modeling isn’t that important. I have my doubts about how well it will do in a full game, though.

{ 2 } Trackbacks

  1. […] Michael Trick has a post about a presentation he attended on the heads-up match between PokerBot and a couple of pros a few months back.  I made a comment on his blog. […]

  2. […] Poker and Operations Research […]