Click the Deal button to begin.

Here are the rules of the game: you and the computer each get one card and ante $1. You bet first, either $0 or $1. Then the computer gets a chance to match you (if you bet $1) or raise you (if you bet $0). If you bet $0 and the computer raises, you get a chance to call. Betting $0 when your opponent has already bet $1 means you fold and lose your ante. If no one folds before the end of betting, the higher card wins the pot; that results in a net gain of either $1 or $2: the other player's ante plus the final bet of $0 or $1.
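The rules above can be written out as a payoff function. Here is a sketch of my own in Python (the function name and action encoding are not from this page, just one way to spell it out):

```python
def payoff(c1, c2, p1_first, p2_action, p1_call=None):
    """Net dollars won by player 1 for one deal.

    c1, c2 are the two cards (higher number = higher card).
    Actions: 'bet' means put in $1, 'pass' means put in $0.
    p1_call is only consulted after a pass-then-raise.
    """
    if p1_first == 'bet':
        if p2_action == 'pass':           # computer folds: player 1 wins its ante
            return 1
        return 2 if c1 > c2 else -2       # computer matched: showdown for ante + bet
    if p2_action == 'pass':               # both passed: showdown for the antes only
        return 1 if c1 > c2 else -1
    if p1_call == 'pass':                 # computer raised and player 1 folds
        return -1
    return 2 if c1 > c2 else -2           # player 1 called the raise: showdown for $2
```

For example, `payoff(10, 5, 'bet', 'pass')` is `1` (the computer folds its ante), and `payoff(5, 10, 'pass', 'bet', 'bet')` is `-2` (player 1 calls the raise and loses the showdown).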

One-card poker is a simple game; nonetheless it has many of the elements of more complicated games, including incomplete information, chance events, and multiple stages, and optimal play requires behaviors like randomization and bluffing. The biggest strategic difference between one-card poker and larger variants such as draw, stud, or hold-em is the idea of hand potential: while 45679 and 24679 are almost equally strong hands in a showdown (both are 9-high), holding 45679 early in the game is much more valuable, because replacing the 9 with either a 3 or an 8 turns it into a straight. In other words, hands need to be evaluated according to both their current value and the possibility that they might become more valuable.

Even the simple game of one-card poker is difficult to solve optimally: the first efficient solution that I know of is presented in [Koller and Pfeffer, 1995]. That paper applies the sequence representation of extensive form games to one-card poker. Before the invention of the sequence form, the standard algorithm was to convert the game to its normal form, which is exponentially large in the size of the deck.

For this page, we are using a single-suit (13-card) deck. With this deck size, the normal form has 2^26 (about 67 million) pure strategies per player. By contrast, the sequence form has only 26 information states and 52 sequences per player. Real-world poker variants are much larger, with many more information states than could possibly fit in memory. Even so, modern techniques (which include tricks like grouping together sets of similar hands and ignoring some of the coupling between very early and very late rounds of betting) can now find approximately-optimal solutions for games like two-player Texas hold-em.
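The size comparison is simple arithmetic: each player has one binary decision point per card in each of two situations, so a pure strategy fixes 26 independent binary choices, while the sequence form only needs one variable per action sequence. A quick check (my own Python, mirroring the counts quoted above):

```python
cards = 13

# One information state per (card, decision point): the first betting
# decision, plus the later call-or-fold decision.
info_states = cards * 2                 # 26

# A pure strategy makes one binary choice at every information state.
pure_strategies = 2 ** info_states      # 2^26, about 67 million

# Sequences per player: bet, pass, pass-then-call, pass-then-fold
# for each possible card.
sequences = cards * 4                   # 52

print(info_states, sequences, pure_strategies)
```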

If you play optimally, you should be able to keep your losses down to about 6.4 cents per deal on average. To hold you to this level, the computerized second player plays according to the betting tables below. By contrast, if the computer decided whether to bet by flipping a coin without looking at its card, you could win up to 50 cents per deal on average. These tables were computed in Matlab by solving a small linear program generated from the sequence form of the game tree. (Get the source.) The tables are not unique; the answer we compute depends on the details of the linear programming solver we use.

Sam Ganzfried points out, interestingly, that there are other equilibrium strategies that weakly dominate this one, i.e., that perform better against some opponents without performing worse against any. For example, in the tables we calculated, player 1 never bets first holding a 5 through 8. So when we are deciding whether to call a bet, our hands 5 through 8 are completely equivalent: the opponent must have something lower than a 5 or higher than an 8. Because of this equivalence, the LP solver chose arbitrarily how to distribute the bets we make on these hands. But we can take any bets we would make on a low hand like a 5 and move them to a higher hand like an 8; doing so doesn't change our performance against this particular opponent, but strictly improves our performance against any opponent who happens to bet first holding 5 through 8. Since we can only improve our payoff, the new strategy is still an equilibrium; and since there are some opponents against whom we actually do improve, the new strategy weakly dominates the current one.

Here is the betting table for player 2. To use it, look up player 2's card in the column headers. Then choose a row according to whether player 1 bet or passed. The corresponding entry says how often to bet: for example, 0.632 means bet 63.2% of the time.

| Holding | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | T | J | Q | K | A |
|---------|---|---|---|---|---|---|---|---|---|---|---|---|---|
| On pass | 1.000 | 1.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| On bet  | 0.000 | 0.000 | 0.000 | 0.251 | 0.408 | 0.583 | 0.759 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
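To turn a table entry into a concrete action, draw a uniform random number and bet whenever it falls below the entry. A minimal sketch in Python (function names are my own):

```python
import random

def bet_with_probability(p, rng=random):
    """Return True (bet) with probability p, else False (pass)."""
    return rng.random() < p

# Example: player 2 holding a 6 facing a bet calls 40.8% of the time.
random.seed(0)
trials = 100_000
freq = sum(bet_with_probability(0.408) for _ in range(trials)) / trials
print(freq)  # close to 0.408
```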

In case you're interested, here is the analogous betting table for player 1. You can play optimally using just this table and a 1000-sided die. The first row says how often to bet in the first round; the second row says how often to bet if you passed in the first round and the computer raised to $1.

| Holding   | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | T | J | Q | K | A |
|-----------|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1st round | 0.454 | 0.443 | 0.254 | 0.000 | 0.000 | 0.000 | 0.000 | 0.422 | 0.549 | 0.598 | 0.615 | 0.628 | 0.641 |
| 2nd round | 0.000 | 0.000 | 0.169 | 0.269 | 0.429 | 0.610 | 0.760 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
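As a sanity check on the 6.4-cents figure, we can transcribe both tables and enumerate all 13 × 12 equally likely deals to get player 1's exact expected payoff. This is a Python sketch of my own (not the original Matlab code); the variable names and structure are assumptions:

```python
from itertools import permutations

RANKS = list(range(2, 15))  # 2..A in a single suit, ace high

# Betting probabilities transcribed from the two tables above.
P2_BET_ON_PASS = dict(zip(RANKS, [1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]))
P2_BET_ON_BET  = dict(zip(RANKS, [0, 0, 0, 0.251, 0.408, 0.583, 0.759,
                                  1, 1, 1, 1, 1, 1]))
P1_ROUND1      = dict(zip(RANKS, [0.454, 0.443, 0.254, 0, 0, 0, 0,
                                  0.422, 0.549, 0.598, 0.615, 0.628, 0.641]))
P1_ROUND2      = dict(zip(RANKS, [0, 0, 0.169, 0.269, 0.429, 0.610, 0.760,
                                  1, 1, 1, 1, 1, 1]))

def showdown(c1, c2, stake):
    return stake if c1 > c2 else -stake

def deal_value(c1, c2):
    """Expected net payoff to player 1 for one fixed deal."""
    b1 = P1_ROUND1[c1]
    # Player 1 bets $1: player 2 folds or calls for a $2 showdown.
    call = P2_BET_ON_BET[c2]
    v_bet = (1 - call) * 1 + call * showdown(c1, c2, 2)
    # Player 1 passes: player 2 checks it down or raises.
    raise_p = P2_BET_ON_PASS[c2]
    call2 = P1_ROUND2[c1]
    v_raised = (1 - call2) * (-1) + call2 * showdown(c1, c2, 2)
    v_pass = (1 - raise_p) * showdown(c1, c2, 1) + raise_p * v_raised
    return b1 * v_bet + (1 - b1) * v_pass

ev = sum(deal_value(c1, c2) for c1, c2 in permutations(RANKS, 2)) / (13 * 12)
print(round(ev, 4))  # should come out near -0.064, player 1's expected loss
```

One useful property of these tables falls out of the same arithmetic: for instance, player 1 holding a 2 gets exactly -$1 whether it bluffs or passes, which is why mixing (betting 45.4% of the time) is optimal there.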

Here are the betting tables plotted as graphs:

Notice that these strategies contain interesting behaviors such as bluffing (for example, player 2 always bets holding a 2 if player 1 passes) and slow-playing (for example, player 1 doesn't always bet on the first round even if he holds an ace).