Lecture on Combinatorial Games
(First few chapters of Winning Ways)
D. Sleator, October 2001

The Sprague-Grundy theory of impartial games gives an elegant way to evaluate sums of games. The sum of two games means the new game in which a move consists of picking a game to move in, then moving in that game. In Sprague-Grundy theory, we assign a number (call it the nimber) to any position in the game. Then we give an algorithm that takes as input the nimbers of two games, and computes the nimber of the sum of these two games. The algorithm, of course, is nim addition.

In this lecture we attempt to generalize this theory to "partizan" games (as opposed to impartial games). In these games, the two players have different sets of moves available. Our goal will be to label such games with numbers, and then give an algorithm that lets us compute the number of the sum of two games. We'll only be partially successful: we will have to restrict the games somewhat in order to get the theory to work.

Let G be a game. Let the two players be called Left and Right. If Left were to move in G, Left might have several options. Call them G^L_1, G^L_2, ..., and let the set G^L = {G^L_1, G^L_2, ...}. Similarly, Right may have several options G^R = {G^R_1, G^R_2, ...}. This completely characterizes the game, so we write:

    G = {G^L | G^R}

If G^L is the empty set, then Left loses when Left must move in the game G. Similarly for Right. This is how the outcome of the game is determined. We'll also assume that a game ends in a finite number of moves.

The intuition behind the labeling (numbering) we come up with will be that the number of a game is the number of moves of advantage that Left has over Right in the game. And with the labeling we come up with, summing games corresponds to just adding the numbers together. Let N(G) be the number associated with G.

With this as a starting point, let's try to evaluate some games. (The "value" of a game will be the number we assign to it, and by "evaluating" it we mean computing this value.)

What's the value of G = { | }? This is the game in which neither player has any moves available, so the next player to move loses. If we add this game to any other game, we know that it cannot change that game at all, because it does not change the set of moves available to either player. Therefore we know that N(G) = 0. That is (omitting N()):

    { | } = 0

Actually, we can find a whole class of other games that must have value 0. Suppose we have ANY game G in which the next player to move can be forced to lose. It turns out that we must have N(G) = 0. Why? Consider any game H. What happens in G+H? Say Left going second has a winning strategy in H. Then Left, going second, can win in G+H by seeing which component Right moves in, and responding with the winning second-player strategy for that component (the given strategy in H, or the strategy that defeats the first player in G). Right will eventually get stuck with no moves, and Left will win. The same thing works if Left has a win going first in H, or if Right has a win going first or second in H. Therefore the outcome of a game is never changed by adding G to it, and for our theory to work this forces N(G) = 0.
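To make these definitions concrete, here is a minimal sketch in Python (ours, not part of the original notes). A game G = {G^L | G^R} is represented as a pair of tuples of games, the player who must move but has no options loses, and the sum of two games allows a move in either component. The names ZERO, ONE, NEG1, add, and wins are invented for illustration.

    from functools import lru_cache

    ZERO = ((), ())          # { | }  : neither player has a move
    ONE  = ((ZERO,), ())     # {0| }  : Left can move to 0, Right cannot move
    NEG1 = ((), (ZERO,))     # { |0}

    def add(g, h):
        """The sum of two games: a move is a move in either component."""
        gl, gr = g
        hl, hr = h
        left  = tuple(add(x, h) for x in gl) + tuple(add(g, x) for x in hl)
        right = tuple(add(x, h) for x in gr) + tuple(add(g, x) for x in hr)
        return (left, right)

    @lru_cache(maxsize=None)
    def wins(g, player):
        """True if `player` ('L' or 'R') can win moving first in g."""
        options = g[0] if player == 'L' else g[1]
        other = 'R' if player == 'L' else 'L'
        # A move wins if it leaves a position from which the opponent,
        # moving next, cannot win.  No options at all means a loss.
        return any(not wins(opt, other) for opt in options)

    # {0| } + { |0} is a loss for whoever moves first, as argued above...
    S = add(ONE, NEG1)
    assert not wins(S, 'L') and not wins(S, 'R')
    # ...and adding it to another game changes no outcomes: Left wins {0| }
    # whether moving first or second, and still does after S is added.
    assert wins(ONE, 'L') and not wins(ONE, 'R')
    assert wins(add(S, ONE), 'L') and not wins(add(S, ONE), 'R')

This brute-force representation grows quickly under add, but it is enough to check the tiny positions discussed here.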
What about the game {0| }? (Note that we've used "0" to indicate a game of value 0, the simplest example being { | }.) In {0| }, Left can move and leave Right looking at { | }, while Right has no move at all. Our guiding intuition that the value should be the number of moves of advantage to Left indicates that this game should have value 1:

    {0| } = 1

By symmetry, we must have that:

    { |0} = -1

We want addition to work, so this requires that {0| } + { |0} = 0. This is, of course, not the same game as { | }, but it does have the property that the first player to move in it loses. So everything is consistent so far.

We can continue to build up games and their values. What about {1| }? This clearly has value 2, because Left has two moves available while Right has none. In general:

    {n| }  = n+1
    { |-n} = -n-1

What about the game {0|1}? What must its value be? (Perhaps working this out with Hackenbush makes it easier to see.) Consider the following game:

    G = {0|1} + {0|1} + { |0}

What happens if Right moves first? Ignoring identical alternatives, he has two options:

    {0|1} + {0|1} + { | }   --or--   {0|1} + 1 + { |0}

That is:

    {0|1} + {0|1}   --or--   {0|1} + 1 + -1

That is:

    {0|1} + {0|1}   --or--   {0|1}

In either case, Left wins. (In the choice on the right, Left moves and leaves Right with { | }; in the choice on the left, Left moves and leaves Right with {0|1}, which loses for Right.)

Now, going back to G, what happens when Left moves first? Left has only one choice (eliminating symmetry), which leaves the following:

    { | } + {0|1} + { |0}

That is:

    {0|1} + { |0}

Now Right has two choices:

    {0| } + { |0}   --or--   {0|1} + { | } = {0|1}

The choice on the left lets Right win; the other choice loses for Right, so he avoids it.

So the game G has the property that whichever player starts, that player loses. This means it must have value 0. Let x be the value of {0|1}. We know that

    x + x + (-1) = 0

Therefore, in order for our theory to work, we must have:

    {0|1} = 1/2

We can go on this way and derive more and more results like this. For example, you can prove that { |4} = 0, {3|5} = 4, {3|4} = 3 + 1/2, and so on. Instead, let's just state the rule that defines these values, and then prove that the whole theory works with these values.

Definition of the value of a game: Say G = {G^L | G^R}. Recursively compute the values of the elements of G^L, and let the set of numbers that results be A. Similarly, compute B from G^R. Now A and B are sets of numbers. Define a = max(A) and b = min(B), with the convention that a is -infinity if A is empty and b is +infinity if B is empty. We require that a < b; this is the restriction alluded to earlier, and games for which it fails are not assigned values by this theory. The value of G is then the simplest number that is greater than a and less than b. Simplicity is ordered as follows: 0 is the simplest number of all; integers are simpler than non-integers, and among integers, smaller absolute value means simpler; the non-integral values that arise are rationals whose denominator is a power of 2, and among these, the smaller the denominator, the simpler the number.
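The simplicity rule can be restated as a small procedure. The following sketch (again ours; the name simplest_between is invented) computes the simplest number strictly between a and b, using -infinity or +infinity for a missing bound. It reproduces the values derived above: {0|1} = 1/2, {3|5} = 4, and { |4} = 0.

    from fractions import Fraction
    from math import floor, ceil, inf

    def simplest_between(a, b):
        """Simplest number x with a < x < b.

        0 is the simplest number; integers of smaller absolute value are
        simpler than larger ones; a non-integer is simpler the smaller its
        power-of-two denominator."""
        assert a < b
        if a < 0 < b:
            return Fraction(0)
        if a >= 0:                      # interval lies at or above 0
            n = floor(a) + 1            # smallest integer greater than a
            if n < b:
                return Fraction(n)
        else:                           # b <= 0: interval lies at or below 0
            n = ceil(b) - 1             # largest integer smaller than b
            if n > a:
                return Fraction(n)
        # No integer fits, so (a, b) sits inside a unit interval; bisect with
        # dyadic midpoints of growing denominator until one lands inside.
        lo, hi = Fraction(floor(a)), Fraction(floor(a) + 1)
        while True:
            mid = (lo + hi) / 2
            if a < mid < b:
                return mid
            if mid <= a:
                lo = mid
            else:
                hi = mid

    assert simplest_between(0, 1) == Fraction(1, 2)     # {0|1}
    assert simplest_between(3, 5) == 4                  # {3|5}
    assert simplest_between(-inf, 4) == 0               # { |4}
    assert simplest_between(3, 4) == Fraction(7, 2)     # {3|4}

Bisecting down from the enclosing integers visits dyadic numbers in order of increasing denominator, which is exactly the simplicity order, so the first midpoint that lands strictly inside (a, b) is the simplest number there.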
Now we can state what we want to prove: if G and H are games with values g and h, then the value of G+H is g+h.

Here is the argument. Write gl and gr for the largest value of a Left option of G and the smallest value of a Right option of G (so g is the simplest number with gl < g < gr), and similarly hl and hr for H, with the same infinity conventions as before when a player has no options. A Left move in G+H is a Left move in G or a Left move in H, so by induction the largest value of a Left option of G+H is max(gl+h, g+hl); likewise the smallest value of a Right option of G+H is min(gr+h, g+hr). So we must show that g+h is the simplest number lying strictly between these two quantities. It certainly lies between them (gl < g and hl < h give gl+h < g+h and g+hl < g+h, and similarly on the right), so what remains is simplicity. There are four cases, depending on whether g and h are integers or dyadic numbers.

Case 1: g and h are both integers. If g+h = 0, it is the simplest number of all, so it is certainly the simplest number in its range. If g+h > 0 then we know that one of g or h is positive, so assume wlog that h > 0. Therefore hl is at most 1 below h (a smaller hl would put the simpler integer h-1 strictly between hl and hr). Therefore hl+g is at most 1 below g+h. And g+h is the simplest number in the range: anything simpler is an integer of smaller absolute value, hence at most g+h-1, hence not above hl+g. An analogous proof works if g+h < 0. This completes case 1.

Case 2: g is an integer, h is a dyadic number. (Here a dyadic number is a non-integral rational number whose denominator is a power of 2.) Then g+h is also a dyadic number, with the same simplicity as h. In fact, the interval (hl+g, hr+g) is an identical copy of (hl, hr) when viewed in terms of the simplicities of the numbers in it: since h is the simplest number in (hl, hr), that interval contains no integers, and shifting it by the integer g changes no denominators. Therefore if h is the simplest number in (hl, hr), then g+h must be the simplest number in (g+hl, g+hr). (And the interval that we actually need to consider may be even smaller -- it must be contained in (gl+h, gr+h) too -- but this only further reduces the choices available.)

Case 3: g and h are both dyadic numbers, with g simpler than h. The argument of case 2 works here too, because the interval (hl+g, hr+g) is again an identical copy of (hl, hr) in this sense: every number in (hl, hr) has a larger denominator than g, so adding g changes none of the denominators.

Case 4: g and h are both dyadic numbers of the same simplicity. In this case it turns out that g+h is simpler than g and h, and in fact it could even be an integer: each of g and h is an odd multiple of 1/2^k for the same k, so their sum is an even multiple of 1/2^k, whose denominator is at most 2^(k-1). Now the interval (hl+g, hr+g) is an identical copy of (hl, hr) when viewed in terms of the simplicities of the numbers in it -- EXCEPT that g+h, the image of h, is simpler than h itself. Therefore, since h was the simplest number in its interval, g+h is certainly the simplest number in its interval. This completes the proof.
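Finally, as a spot check on this claim, one can evaluate a game directly from the definition and confirm on small positions that values add. This sketch reuses ZERO, ONE, NEG1, add, and simplest_between from the sketches above; the name value (and the position HALF) are again ours.

    from fractions import Fraction
    from math import inf

    def value(g):
        """Value of a game (left_options, right_options), assuming it is a number."""
        left_vals  = [value(x) for x in g[0]]
        right_vals = [value(x) for x in g[1]]
        a = max(left_vals)  if left_vals  else -inf
        b = min(right_vals) if right_vals else inf
        assert a < b, "this game is not assigned a number by the theory"
        return simplest_between(a, b)

    HALF = ((ZERO,), (ONE,))                     # {0|1}
    assert value(HALF) == Fraction(1, 2)
    assert value(add(HALF, HALF)) == 1           # 1/2 + 1/2 = 1  (case 4 flavour)
    assert value(add(HALF, NEG1)) == Fraction(-1, 2)
    assert value(add(ONE, ONE)) == 2             # two moves for Left, none for Right

Of course these are only examples; the argument above is what guarantees that the value of a sum is always the sum of the values.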